Test Report: Docker_Linux_crio_arm64 17044

df168c2d81a1825740328057ca29cb976d1a3614:2023-08-12:30542
Failed tests (7/304)

| Order | Failed Test                                          | Duration (s) |
|-------|------------------------------------------------------|--------------|
| 32    | TestAddons/parallel/Ingress                          | 170.05       |
| 161   | TestIngressAddonLegacy/serial/ValidateIngressAddons  | 183.47       |
| 211   | TestMultiNode/serial/PingHostFrom2Pods               | 4.61         |
| 232   | TestRunningBinaryUpgrade                             | 69.56        |
| 235   | TestMissingContainerUpgrade                          | 185.53       |
| 247   | TestStoppedBinaryUpgrade/Upgrade                     | 82.81        |
| 258   | TestPause/serial/SecondStartNoReconfiguration        | 52.92        |
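To reproduce one of these failures outside CI, the suite can be run directly with go test against a locally built binary. A minimal sketch, assuming the standard minikube repository layout and that the integration harness still accepts --minikube-start-args (both taken from the minikube contributor docs, not from this report):

# build the binary the tests drive, then re-run a single failing test
# (make target and flag names assumed from the minikube repo, not this report)
make out/minikube-linux-arm64
go test ./test/integration -v -timeout 60m \
  -run "TestAddons/parallel/Ingress" \
  --minikube-start-args="--driver=docker --container-runtime=crio"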
TestAddons/parallel/Ingress (170.05s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-557401 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-557401 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-557401 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [ece24bbd-fd6a-4146-a61a-7a43c30e0d4a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [ece24bbd-fd6a-4146-a61a-7a43c30e0d4a] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.019206105s
addons_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p addons-557401 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-557401 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.412198941s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
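The ssh wrapper surfaces the remote command's exit code, and 28 is curl's operation-timeout error (CURLE_OPERATION_TIMEDOUT): the command ran for over two minutes without the controller ever answering. A hedged debugging sketch for this point in the test; the ingress-nginx-controller deployment name is assumed from the upstream manifests the addon ships, not confirmed by this log:

# confirm the controller pod survived and inspect its recent log
kubectl --context addons-557401 -n ingress-nginx get pods -o wide
kubectl --context addons-557401 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50
# retry with a short explicit timeout rather than waiting out the default
out/minikube-linux-arm64 -p addons-557401 ssh \
  "curl -sv --max-time 10 -H 'Host: nginx.example.com' http://127.0.0.1/"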
addons_test.go:262: (dbg) Run:  kubectl --context addons-557401 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:262: (dbg) Done: kubectl --context addons-557401 replace --force -f testdata/ingress-dns-example-v1.yaml: (1.010500548s)
addons_test.go:267: (dbg) Run:  out/minikube-linux-arm64 -p addons-557401 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.048370866s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p addons-557401 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p addons-557401 addons disable ingress-dns --alsologtostderr -v=1: (1.195252863s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-arm64 -p addons-557401 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-arm64 -p addons-557401 addons disable ingress --alsologtostderr -v=1: (7.802236465s)
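The nslookup failure is the same symptom seen from the DNS side: queries to the node at 192.168.49.2 simply go unanswered. Run before the addons are torn down (as they are just above), a check along these lines separates a dead ingress-dns pod from a blocked port; the app=minikube-ingress-dns label is an assumption from the addon manifest, not something this log confirms:

# query the node IP directly with a short timeout instead of nslookup's default
dig +time=5 +tries=1 @192.168.49.2 hello-john.test
# is the addon pod actually running, and what has it logged?
kubectl --context addons-557401 -n kube-system get pods -l app=minikube-ingress-dns
kubectl --context addons-557401 -n kube-system logs -l app=minikube-ingress-dns --tail=50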
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-557401
helpers_test.go:235: (dbg) docker inspect addons-557401:

-- stdout --
	[
	    {
	        "Id": "047bc6397f6f8d5b75e761ed1cb023e71c9b1d0ae0d058c79d79319534c04928",
	        "Created": "2023-08-11T23:02:16.060449273Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 8691,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-08-11T23:02:16.400598244Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:abe4482d178dd08cce0cdcb8e444349673c3edfa8e7d6462144a8d9173479eb6",
	        "ResolvConfPath": "/var/lib/docker/containers/047bc6397f6f8d5b75e761ed1cb023e71c9b1d0ae0d058c79d79319534c04928/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/047bc6397f6f8d5b75e761ed1cb023e71c9b1d0ae0d058c79d79319534c04928/hostname",
	        "HostsPath": "/var/lib/docker/containers/047bc6397f6f8d5b75e761ed1cb023e71c9b1d0ae0d058c79d79319534c04928/hosts",
	        "LogPath": "/var/lib/docker/containers/047bc6397f6f8d5b75e761ed1cb023e71c9b1d0ae0d058c79d79319534c04928/047bc6397f6f8d5b75e761ed1cb023e71c9b1d0ae0d058c79d79319534c04928-json.log",
	        "Name": "/addons-557401",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-557401:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-557401",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/42782218546b7d75bfbd9e8ae2ffcfbbd19b1dd514ec91a7d6ec49224a0caeda-init/diff:/var/lib/docker/overlay2/9f8bf17bd2eed1bf502486fc30f9be0589884e58aed50b5fbf77bc48ebc9a592/diff",
	                "MergedDir": "/var/lib/docker/overlay2/42782218546b7d75bfbd9e8ae2ffcfbbd19b1dd514ec91a7d6ec49224a0caeda/merged",
	                "UpperDir": "/var/lib/docker/overlay2/42782218546b7d75bfbd9e8ae2ffcfbbd19b1dd514ec91a7d6ec49224a0caeda/diff",
	                "WorkDir": "/var/lib/docker/overlay2/42782218546b7d75bfbd9e8ae2ffcfbbd19b1dd514ec91a7d6ec49224a0caeda/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-557401",
	                "Source": "/var/lib/docker/volumes/addons-557401/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-557401",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-557401",
	                "name.minikube.sigs.k8s.io": "addons-557401",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4859a3e074fbef324602e0879ab9a495a9ca6f0faf7dcad2f351123d33e7380e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/4859a3e074fb",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-557401": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "047bc6397f6f",
	                        "addons-557401"
	                    ],
	                    "NetworkID": "eac321880018004b369459128d7d9f31e73adc013da6318236673c780704cf12",
	                    "EndpointID": "b153a49ec89de89c6d1e54e9588c4d9c782e418acea128988398f4fa1913c521",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
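Little in the inspect dump bears on the failure: the node container is Running, holds the expected 192.168.49.2 address, and has all five ports published. When only those fields matter, docker's own template flag trims the output to a few lines:

# pull just the health-relevant fields out of docker inspect
docker inspect -f '{{.State.Status}} (pid {{.State.Pid}})' addons-557401
docker inspect -f '{{json .NetworkSettings.Ports}}' addons-557401
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' addons-557401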
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-557401 -n addons-557401
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-557401 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-557401 logs -n 25: (1.563684687s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-038476   | jenkins | v1.31.1 | 11 Aug 23 23:00 UTC |                     |
	|         | -p download-only-038476           |                        |         |         |                     |                     |
	|         | --force --alsologtostderr         |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                        |         |         |                     |                     |
	|         | --container-runtime=crio          |                        |         |         |                     |                     |
	|         | --driver=docker                   |                        |         |         |                     |                     |
	|         | --container-runtime=crio          |                        |         |         |                     |                     |
	| start   | -o=json --download-only           | download-only-038476   | jenkins | v1.31.1 | 11 Aug 23 23:01 UTC |                     |
	|         | -p download-only-038476           |                        |         |         |                     |                     |
	|         | --force --alsologtostderr         |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.4      |                        |         |         |                     |                     |
	|         | --container-runtime=crio          |                        |         |         |                     |                     |
	|         | --driver=docker                   |                        |         |         |                     |                     |
	|         | --container-runtime=crio          |                        |         |         |                     |                     |
	| start   | -o=json --download-only           | download-only-038476   | jenkins | v1.31.1 | 11 Aug 23 23:01 UTC |                     |
	|         | -p download-only-038476           |                        |         |         |                     |                     |
	|         | --force --alsologtostderr         |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.0-rc.0 |                        |         |         |                     |                     |
	|         | --container-runtime=crio          |                        |         |         |                     |                     |
	|         | --driver=docker                   |                        |         |         |                     |                     |
	|         | --container-runtime=crio          |                        |         |         |                     |                     |
	| delete  | --all                             | minikube               | jenkins | v1.31.1 | 11 Aug 23 23:01 UTC | 11 Aug 23 23:01 UTC |
	| delete  | -p download-only-038476           | download-only-038476   | jenkins | v1.31.1 | 11 Aug 23 23:01 UTC | 11 Aug 23 23:01 UTC |
	| delete  | -p download-only-038476           | download-only-038476   | jenkins | v1.31.1 | 11 Aug 23 23:01 UTC | 11 Aug 23 23:01 UTC |
	| start   | --download-only -p                | download-docker-499597 | jenkins | v1.31.1 | 11 Aug 23 23:01 UTC |                     |
	|         | download-docker-499597            |                        |         |         |                     |                     |
	|         | --alsologtostderr                 |                        |         |         |                     |                     |
	|         | --driver=docker                   |                        |         |         |                     |                     |
	|         | --container-runtime=crio          |                        |         |         |                     |                     |
	| delete  | -p download-docker-499597         | download-docker-499597 | jenkins | v1.31.1 | 11 Aug 23 23:01 UTC | 11 Aug 23 23:01 UTC |
	| start   | --download-only -p                | binary-mirror-747851   | jenkins | v1.31.1 | 11 Aug 23 23:01 UTC |                     |
	|         | binary-mirror-747851              |                        |         |         |                     |                     |
	|         | --alsologtostderr                 |                        |         |         |                     |                     |
	|         | --binary-mirror                   |                        |         |         |                     |                     |
	|         | http://127.0.0.1:39489            |                        |         |         |                     |                     |
	|         | --driver=docker                   |                        |         |         |                     |                     |
	|         | --container-runtime=crio          |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-747851           | binary-mirror-747851   | jenkins | v1.31.1 | 11 Aug 23 23:01 UTC | 11 Aug 23 23:01 UTC |
	| start   | -p addons-557401                  | addons-557401          | jenkins | v1.31.1 | 11 Aug 23 23:01 UTC | 11 Aug 23 23:04 UTC |
	|         | --wait=true --memory=4000         |                        |         |         |                     |                     |
	|         | --alsologtostderr                 |                        |         |         |                     |                     |
	|         | --addons=registry                 |                        |         |         |                     |                     |
	|         | --addons=metrics-server           |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots          |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver      |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                 |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner            |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget         |                        |         |         |                     |                     |
	|         | --driver=docker                   |                        |         |         |                     |                     |
	|         | --container-runtime=crio          |                        |         |         |                     |                     |
	|         | --addons=ingress                  |                        |         |         |                     |                     |
	|         | --addons=ingress-dns              |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p          | addons-557401          | jenkins | v1.31.1 | 11 Aug 23 23:04 UTC | 11 Aug 23 23:04 UTC |
	|         | addons-557401                     |                        |         |         |                     |                     |
	| addons  | enable headlamp                   | addons-557401          | jenkins | v1.31.1 | 11 Aug 23 23:04 UTC | 11 Aug 23 23:04 UTC |
	|         | -p addons-557401                  |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1            |                        |         |         |                     |                     |
	| ip      | addons-557401 ip                  | addons-557401          | jenkins | v1.31.1 | 11 Aug 23 23:04 UTC | 11 Aug 23 23:04 UTC |
	| addons  | addons-557401 addons disable      | addons-557401          | jenkins | v1.31.1 | 11 Aug 23 23:04 UTC | 11 Aug 23 23:04 UTC |
	|         | registry --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                              |                        |         |         |                     |                     |
	| addons  | addons-557401 addons              | addons-557401          | jenkins | v1.31.1 | 11 Aug 23 23:04 UTC | 11 Aug 23 23:04 UTC |
	|         | disable metrics-server            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1            |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p       | addons-557401          | jenkins | v1.31.1 | 11 Aug 23 23:04 UTC | 11 Aug 23 23:04 UTC |
	|         | addons-557401                     |                        |         |         |                     |                     |
	| ssh     | addons-557401 ssh curl -s         | addons-557401          | jenkins | v1.31.1 | 11 Aug 23 23:05 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:       |                        |         |         |                     |                     |
	|         | nginx.example.com'                |                        |         |         |                     |                     |
	| addons  | addons-557401 addons              | addons-557401          | jenkins | v1.31.1 | 11 Aug 23 23:05 UTC | 11 Aug 23 23:05 UTC |
	|         | disable csi-hostpath-driver       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1            |                        |         |         |                     |                     |
	| addons  | addons-557401 addons              | addons-557401          | jenkins | v1.31.1 | 11 Aug 23 23:05 UTC | 11 Aug 23 23:05 UTC |
	|         | disable volumesnapshots           |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1            |                        |         |         |                     |                     |
	| ip      | addons-557401 ip                  | addons-557401          | jenkins | v1.31.1 | 11 Aug 23 23:07 UTC | 11 Aug 23 23:07 UTC |
	| addons  | addons-557401 addons disable      | addons-557401          | jenkins | v1.31.1 | 11 Aug 23 23:07 UTC | 11 Aug 23 23:07 UTC |
	|         | ingress-dns --alsologtostderr     |                        |         |         |                     |                     |
	|         | -v=1                              |                        |         |         |                     |                     |
	| addons  | addons-557401 addons disable      | addons-557401          | jenkins | v1.31.1 | 11 Aug 23 23:07 UTC | 11 Aug 23 23:07 UTC |
	|         | ingress --alsologtostderr -v=1    |                        |         |         |                     |                     |
	|---------|-----------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/11 23:01:52
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.20.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0811 23:01:52.755402    8208 out.go:296] Setting OutFile to fd 1 ...
	I0811 23:01:52.755590    8208 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:01:52.755619    8208 out.go:309] Setting ErrFile to fd 2...
	I0811 23:01:52.755639    8208 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:01:52.755913    8208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17044-2333/.minikube/bin
	I0811 23:01:52.756382    8208 out.go:303] Setting JSON to false
	I0811 23:01:52.757140    8208 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":2661,"bootTime":1691792252,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0811 23:01:52.757240    8208 start.go:138] virtualization:  
	I0811 23:01:52.760189    8208 out.go:177] * [addons-557401] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	I0811 23:01:52.762632    8208 out.go:177]   - MINIKUBE_LOCATION=17044
	I0811 23:01:52.764649    8208 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0811 23:01:52.762783    8208 notify.go:220] Checking for updates...
	I0811 23:01:52.768918    8208 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17044-2333/kubeconfig
	I0811 23:01:52.770587    8208 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17044-2333/.minikube
	I0811 23:01:52.772202    8208 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0811 23:01:52.774015    8208 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0811 23:01:52.776234    8208 driver.go:373] Setting default libvirt URI to qemu:///system
	I0811 23:01:52.801015    8208 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0811 23:01:52.801180    8208 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0811 23:01:52.894403    8208 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-08-11 23:01:52.884436308 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0811 23:01:52.894507    8208 docker.go:294] overlay module found
	I0811 23:01:52.897665    8208 out.go:177] * Using the docker driver based on user configuration
	I0811 23:01:52.899253    8208 start.go:298] selected driver: docker
	I0811 23:01:52.899271    8208 start.go:901] validating driver "docker" against <nil>
	I0811 23:01:52.899287    8208 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0811 23:01:52.899899    8208 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0811 23:01:52.969438    8208 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-08-11 23:01:52.959364912 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0811 23:01:52.969592    8208 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0811 23:01:52.969807    8208 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0811 23:01:52.971658    8208 out.go:177] * Using Docker driver with root privileges
	I0811 23:01:52.973199    8208 cni.go:84] Creating CNI manager for ""
	I0811 23:01:52.973220    8208 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0811 23:01:52.973229    8208 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0811 23:01:52.973239    8208 start_flags.go:319] config:
	{Name:addons-557401 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:addons-557401 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0811 23:01:52.976137    8208 out.go:177] * Starting control plane node addons-557401 in cluster addons-557401
	I0811 23:01:52.977862    8208 cache.go:122] Beginning downloading kic base image for docker with crio
	I0811 23:01:52.979396    8208 out.go:177] * Pulling base image ...
	I0811 23:01:52.980922    8208 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0811 23:01:52.980970    8208 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17044-2333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-arm64.tar.lz4
	I0811 23:01:52.980982    8208 cache.go:57] Caching tarball of preloaded images
	I0811 23:01:52.981014    8208 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 in local docker daemon
	I0811 23:01:52.981064    8208 preload.go:174] Found /home/jenkins/minikube-integration/17044-2333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0811 23:01:52.981075    8208 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on crio
	I0811 23:01:52.981514    8208 profile.go:148] Saving config to /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/config.json ...
	I0811 23:01:52.981546    8208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/config.json: {Name:mk3a7ed337a83ebae9e50d40bb4342e9d850da5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:01:53.003480    8208 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 to local cache
	I0811 23:01:53.003589    8208 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 in local cache directory
	I0811 23:01:53.003607    8208 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 in local cache directory, skipping pull
	I0811 23:01:53.003612    8208 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 exists in cache, skipping pull
	I0811 23:01:53.003619    8208 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 as a tarball
	I0811 23:01:53.003624    8208 cache.go:163] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 from local cache
	I0811 23:02:09.044532    8208 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 from cached tarball
	I0811 23:02:09.044569    8208 cache.go:195] Successfully downloaded all kic artifacts
	I0811 23:02:09.044627    8208 start.go:365] acquiring machines lock for addons-557401: {Name:mk3935544e0cb0fe5aed6d416ead449edd6a098c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 23:02:09.044746    8208 start.go:369] acquired machines lock for "addons-557401" in 96.969µs
	I0811 23:02:09.044788    8208 start.go:93] Provisioning new machine with config: &{Name:addons-557401 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:addons-557401 Namespace:default APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disabl
eMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0811 23:02:09.044924    8208 start.go:125] createHost starting for "" (driver="docker")
	I0811 23:02:09.046920    8208 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0811 23:02:09.047176    8208 start.go:159] libmachine.API.Create for "addons-557401" (driver="docker")
	I0811 23:02:09.047206    8208 client.go:168] LocalClient.Create starting
	I0811 23:02:09.047350    8208 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem
	I0811 23:02:09.502839    8208 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/cert.pem
	I0811 23:02:09.790567    8208 cli_runner.go:164] Run: docker network inspect addons-557401 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0811 23:02:09.808143    8208 cli_runner.go:211] docker network inspect addons-557401 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0811 23:02:09.808235    8208 network_create.go:281] running [docker network inspect addons-557401] to gather additional debugging logs...
	I0811 23:02:09.808255    8208 cli_runner.go:164] Run: docker network inspect addons-557401
	W0811 23:02:09.827394    8208 cli_runner.go:211] docker network inspect addons-557401 returned with exit code 1
	I0811 23:02:09.827423    8208 network_create.go:284] error running [docker network inspect addons-557401]: docker network inspect addons-557401: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-557401 not found
	I0811 23:02:09.827436    8208 network_create.go:286] output of [docker network inspect addons-557401]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-557401 not found
	
	** /stderr **
	I0811 23:02:09.827491    8208 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0811 23:02:09.846180    8208 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40010d7ec0}
	I0811 23:02:09.846220    8208 network_create.go:123] attempt to create docker network addons-557401 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0811 23:02:09.846280    8208 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-557401 addons-557401
	I0811 23:02:09.915898    8208 network_create.go:107] docker network addons-557401 192.168.49.0/24 created
	I0811 23:02:09.915926    8208 kic.go:117] calculated static IP "192.168.49.2" for the "addons-557401" container
	I0811 23:02:09.916008    8208 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0811 23:02:09.932280    8208 cli_runner.go:164] Run: docker volume create addons-557401 --label name.minikube.sigs.k8s.io=addons-557401 --label created_by.minikube.sigs.k8s.io=true
	I0811 23:02:09.950867    8208 oci.go:103] Successfully created a docker volume addons-557401
	I0811 23:02:09.950960    8208 cli_runner.go:164] Run: docker run --rm --name addons-557401-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-557401 --entrypoint /usr/bin/test -v addons-557401:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 -d /var/lib
	I0811 23:02:11.860563    8208 cli_runner.go:217] Completed: docker run --rm --name addons-557401-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-557401 --entrypoint /usr/bin/test -v addons-557401:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 -d /var/lib: (1.909561271s)
	I0811 23:02:11.860589    8208 oci.go:107] Successfully prepared a docker volume addons-557401
	I0811 23:02:11.860621    8208 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0811 23:02:11.860639    8208 kic.go:190] Starting extracting preloaded images to volume ...
	I0811 23:02:11.860736    8208 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17044-2333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-557401:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 -I lz4 -xf /preloaded.tar -C /extractDir
	I0811 23:02:15.968871    8208 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17044-2333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-557401:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 -I lz4 -xf /preloaded.tar -C /extractDir: (4.108096761s)
	I0811 23:02:15.968915    8208 kic.go:199] duration metric: took 4.108272 seconds to extract preloaded images to volume
	W0811 23:02:15.969045    8208 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0811 23:02:15.969206    8208 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0811 23:02:16.043852    8208 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-557401 --name addons-557401 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-557401 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-557401 --network addons-557401 --ip 192.168.49.2 --volume addons-557401:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37
	I0811 23:02:16.409848    8208 cli_runner.go:164] Run: docker container inspect addons-557401 --format={{.State.Running}}
	I0811 23:02:16.435454    8208 cli_runner.go:164] Run: docker container inspect addons-557401 --format={{.State.Status}}
	I0811 23:02:16.458189    8208 cli_runner.go:164] Run: docker exec addons-557401 stat /var/lib/dpkg/alternatives/iptables
	I0811 23:02:16.528586    8208 oci.go:144] the created container "addons-557401" has a running status.
	I0811 23:02:16.528615    8208 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17044-2333/.minikube/machines/addons-557401/id_rsa...
	I0811 23:02:17.608927    8208 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17044-2333/.minikube/machines/addons-557401/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0811 23:02:17.634867    8208 cli_runner.go:164] Run: docker container inspect addons-557401 --format={{.State.Status}}
	I0811 23:02:17.659846    8208 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0811 23:02:17.659863    8208 kic_runner.go:114] Args: [docker exec --privileged addons-557401 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0811 23:02:17.747594    8208 cli_runner.go:164] Run: docker container inspect addons-557401 --format={{.State.Status}}
	I0811 23:02:17.765436    8208 machine.go:88] provisioning docker machine ...
	I0811 23:02:17.765464    8208 ubuntu.go:169] provisioning hostname "addons-557401"
	I0811 23:02:17.765531    8208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557401
	I0811 23:02:17.783831    8208 main.go:141] libmachine: Using SSH client type: native
	I0811 23:02:17.784284    8208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f940] 0x3a22d0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0811 23:02:17.784298    8208 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-557401 && echo "addons-557401" | sudo tee /etc/hostname
	I0811 23:02:17.947769    8208 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-557401
	
	I0811 23:02:17.947846    8208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557401
	I0811 23:02:17.967618    8208 main.go:141] libmachine: Using SSH client type: native
	I0811 23:02:17.968109    8208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f940] 0x3a22d0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0811 23:02:17.968132    8208 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-557401' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-557401/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-557401' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0811 23:02:18.114368    8208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0811 23:02:18.114395    8208 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17044-2333/.minikube CaCertPath:/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17044-2333/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17044-2333/.minikube}
	I0811 23:02:18.114422    8208 ubuntu.go:177] setting up certificates
	I0811 23:02:18.114441    8208 provision.go:83] configureAuth start
	I0811 23:02:18.114528    8208 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-557401
	I0811 23:02:18.139265    8208 provision.go:138] copyHostCerts
	I0811 23:02:18.139346    8208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17044-2333/.minikube/ca.pem (1082 bytes)
	I0811 23:02:18.139474    8208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17044-2333/.minikube/cert.pem (1123 bytes)
	I0811 23:02:18.139536    8208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17044-2333/.minikube/key.pem (1675 bytes)
	I0811 23:02:18.139585    8208 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17044-2333/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca-key.pem org=jenkins.addons-557401 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-557401]
	I0811 23:02:18.789234    8208 provision.go:172] copyRemoteCerts
	I0811 23:02:18.789301    8208 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0811 23:02:18.789346    8208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557401
	I0811 23:02:18.808315    8208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/addons-557401/id_rsa Username:docker}
	I0811 23:02:18.911512    8208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0811 23:02:18.939465    8208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0811 23:02:18.971557    8208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0811 23:02:19.000175    8208 provision.go:86] duration metric: configureAuth took 885.7183ms
	I0811 23:02:19.000199    8208 ubuntu.go:193] setting minikube options for container-runtime
	I0811 23:02:19.000394    8208 config.go:182] Loaded profile config "addons-557401": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0811 23:02:19.000508    8208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557401
	I0811 23:02:19.019490    8208 main.go:141] libmachine: Using SSH client type: native
	I0811 23:02:19.019929    8208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f940] 0x3a22d0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0811 23:02:19.019946    8208 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0811 23:02:19.289849    8208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
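	For reference, the command above leaves a one-line environment file on the node. A quick way to confirm it while the profile is running (a sketch; the profile name addons-557401 is taken from this run):
	    minikube ssh -p addons-557401 "cat /etc/sysconfig/crio.minikube"
	    # expected, matching the SSH output above:
	    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '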
	
	I0811 23:02:19.289876    8208 machine.go:91] provisioned docker machine in 1.52442308s
	I0811 23:02:19.289886    8208 client.go:171] LocalClient.Create took 10.242672237s
	I0811 23:02:19.289899    8208 start.go:167] duration metric: libmachine.API.Create for "addons-557401" took 10.242723781s
	I0811 23:02:19.289906    8208 start.go:300] post-start starting for "addons-557401" (driver="docker")
	I0811 23:02:19.289915    8208 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0811 23:02:19.289977    8208 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0811 23:02:19.290030    8208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557401
	I0811 23:02:19.311183    8208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/addons-557401/id_rsa Username:docker}
	I0811 23:02:19.416541    8208 ssh_runner.go:195] Run: cat /etc/os-release
	I0811 23:02:19.420795    8208 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0811 23:02:19.420832    8208 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0811 23:02:19.420845    8208 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0811 23:02:19.420852    8208 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0811 23:02:19.420862    8208 filesync.go:126] Scanning /home/jenkins/minikube-integration/17044-2333/.minikube/addons for local assets ...
	I0811 23:02:19.420934    8208 filesync.go:126] Scanning /home/jenkins/minikube-integration/17044-2333/.minikube/files for local assets ...
	I0811 23:02:19.420965    8208 start.go:303] post-start completed in 131.053353ms
	I0811 23:02:19.421344    8208 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-557401
	I0811 23:02:19.439650    8208 profile.go:148] Saving config to /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/config.json ...
	I0811 23:02:19.439937    8208 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0811 23:02:19.439985    8208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557401
	I0811 23:02:19.458683    8208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/addons-557401/id_rsa Username:docker}
	I0811 23:02:19.559432    8208 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0811 23:02:19.565449    8208 start.go:128] duration metric: createHost completed in 10.520511041s
	I0811 23:02:19.565476    8208 start.go:83] releasing machines lock for "addons-557401", held for 10.520714875s
	I0811 23:02:19.565550    8208 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-557401
	I0811 23:02:19.584401    8208 ssh_runner.go:195] Run: cat /version.json
	I0811 23:02:19.584457    8208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557401
	I0811 23:02:19.584711    8208 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0811 23:02:19.584766    8208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557401
	I0811 23:02:19.613332    8208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/addons-557401/id_rsa Username:docker}
	I0811 23:02:19.618386    8208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/addons-557401/id_rsa Username:docker}
	I0811 23:02:19.713837    8208 ssh_runner.go:195] Run: systemctl --version
	I0811 23:02:19.856369    8208 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0811 23:02:20.000741    8208 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0811 23:02:20.016203    8208 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0811 23:02:20.044925    8208 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0811 23:02:20.045047    8208 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0811 23:02:20.085025    8208 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
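	The two find commands above simply rename matching CNI configs with a .mk_disabled suffix so CRI-O stops loading them. A sketch of what the directory should look like afterwards, with file names taken from the log line above plus the loopback config disabled in the previous step:
	    ls /etc/cni/net.d/
	    # 100-crio-bridge.conf.mk_disabled
	    # 87-podman-bridge.conflist.mk_disabled
	    # 200-loopback.conf.mk_disabled  (name assumed; only the *loopback.conf* glob is logged)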
	I0811 23:02:20.085051    8208 start.go:466] detecting cgroup driver to use...
	I0811 23:02:20.085133    8208 detect.go:196] detected "cgroupfs" cgroup driver on host os
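	A related host-side check (the stat recipe from the Kubernetes docs) shows which cgroup hierarchy the host mounts; minikube's detection here settled on the "cgroupfs" driver:
	    stat -fc %T /sys/fs/cgroup/
	    # tmpfs     -> cgroup v1
	    # cgroup2fs -> cgroup v2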
	I0811 23:02:20.085203    8208 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0811 23:02:20.103658    8208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0811 23:02:20.118043    8208 docker.go:196] disabling cri-docker service (if available) ...
	I0811 23:02:20.118143    8208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0811 23:02:20.135047    8208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0811 23:02:20.152319    8208 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0811 23:02:20.244665    8208 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0811 23:02:20.337606    8208 docker.go:212] disabling docker service ...
	I0811 23:02:20.337714    8208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0811 23:02:20.360024    8208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0811 23:02:20.373741    8208 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0811 23:02:20.461318    8208 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0811 23:02:20.553686    8208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0811 23:02:20.566663    8208 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0811 23:02:20.585927    8208 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0811 23:02:20.585996    8208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0811 23:02:20.598210    8208 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0811 23:02:20.598319    8208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0811 23:02:20.611899    8208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0811 23:02:20.624521    8208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
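	After the sed edits above, the drop-in should read roughly as follows. This is a sketch of the expected result; the section headers come from CRI-O's standard config layout and are not captured in this log:
	    # /etc/crio/crio.conf.d/02-crio.conf (expected shape)
	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.9"
	    [crio.runtime]
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"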
	I0811 23:02:20.636786    8208 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0811 23:02:20.648172    8208 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0811 23:02:20.658085    8208 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0811 23:02:20.668069    8208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0811 23:02:20.756256    8208 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0811 23:02:20.888686    8208 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0811 23:02:20.888830    8208 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0811 23:02:20.893707    8208 start.go:534] Will wait 60s for crictl version
	I0811 23:02:20.893809    8208 ssh_runner.go:195] Run: which crictl
	I0811 23:02:20.898122    8208 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0811 23:02:20.943085    8208 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0811 23:02:20.943253    8208 ssh_runner.go:195] Run: crio --version
	I0811 23:02:20.986467    8208 ssh_runner.go:195] Run: crio --version
	I0811 23:02:21.040500    8208 out.go:177] * Preparing Kubernetes v1.27.4 on CRI-O 1.24.6 ...
	I0811 23:02:21.042605    8208 cli_runner.go:164] Run: docker network inspect addons-557401 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0811 23:02:21.064822    8208 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0811 23:02:21.069562    8208 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0811 23:02:21.083220    8208 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0811 23:02:21.083287    8208 ssh_runner.go:195] Run: sudo crictl images --output json
	I0811 23:02:21.148039    8208 crio.go:496] all images are preloaded for cri-o runtime.
	I0811 23:02:21.148063    8208 crio.go:415] Images already preloaded, skipping extraction
	I0811 23:02:21.148138    8208 ssh_runner.go:195] Run: sudo crictl images --output json
	I0811 23:02:21.188062    8208 crio.go:496] all images are preloaded for cri-o runtime.
	I0811 23:02:21.188084    8208 cache_images.go:84] Images are preloaded, skipping loading
	I0811 23:02:21.188162    8208 ssh_runner.go:195] Run: crio config
	I0811 23:02:21.253250    8208 cni.go:84] Creating CNI manager for ""
	I0811 23:02:21.253271    8208 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0811 23:02:21.253323    8208 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0811 23:02:21.253352    8208 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-557401 NodeName:addons-557401 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0811 23:02:21.253550    8208 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-557401"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
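	A generated config like the one above can be sanity-checked before init runs. A sketch using kubeadm's own validator (the validate subcommand exists in the v1.26+ kubeadm line used here; the yaml path is the one the file is copied to a few lines below):
	    sudo /var/lib/minikube/binaries/v1.27.4/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml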
	
	I0811 23:02:21.253634    8208 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-557401 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:addons-557401 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0811 23:02:21.253728    8208 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0811 23:02:21.266110    8208 binaries.go:44] Found k8s binaries, skipping transfer
	I0811 23:02:21.266210    8208 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0811 23:02:21.276986    8208 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I0811 23:02:21.298207    8208 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
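	With the unit file and its drop-in both in place, systemd can render the merged definition. A quick way to inspect it on the node (a sketch):
	    systemctl cat kubelet
	    # prints /lib/systemd/system/kubelet.service followed by the
	    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf drop-in written above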
	I0811 23:02:21.318969    8208 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0811 23:02:21.339574    8208 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0811 23:02:21.343952    8208 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0811 23:02:21.357560    8208 certs.go:56] Setting up /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401 for IP: 192.168.49.2
	I0811 23:02:21.357590    8208 certs.go:190] acquiring lock for shared ca certs: {Name:mk92ef0e52f7a4bf6e55e35fe7431dc846a67439 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:02:21.357716    8208 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17044-2333/.minikube/ca.key
	I0811 23:02:22.269384    8208 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17044-2333/.minikube/ca.crt ...
	I0811 23:02:22.269415    8208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17044-2333/.minikube/ca.crt: {Name:mk8e1d54ec261f1e3ee8bd77c343515005fba33f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:02:22.269625    8208 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17044-2333/.minikube/ca.key ...
	I0811 23:02:22.269638    8208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17044-2333/.minikube/ca.key: {Name:mk7d8d7cec3d8600403a58c90c0d4926dcefb8f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:02:22.269724    8208 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17044-2333/.minikube/proxy-client-ca.key
	I0811 23:02:22.909719    8208 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17044-2333/.minikube/proxy-client-ca.crt ...
	I0811 23:02:22.909749    8208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17044-2333/.minikube/proxy-client-ca.crt: {Name:mk2c6d4216665ddac5863ac11243451ddb0ab78e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:02:22.909941    8208 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17044-2333/.minikube/proxy-client-ca.key ...
	I0811 23:02:22.909953    8208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17044-2333/.minikube/proxy-client-ca.key: {Name:mk76b235ecbb2648abacc6d05d491526267b394b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:02:22.910066    8208 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/client.key
	I0811 23:02:22.910082    8208 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/client.crt with IP's: []
	I0811 23:02:23.281630    8208 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/client.crt ...
	I0811 23:02:23.281664    8208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/client.crt: {Name:mkde4087d5333b6b8bc46a6e9c77e58faaf01335 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:02:23.281854    8208 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/client.key ...
	I0811 23:02:23.281865    8208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/client.key: {Name:mk816ba6696f049517c11fd9246a868bdeb36c3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:02:23.281952    8208 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/apiserver.key.dd3b5fb2
	I0811 23:02:23.281972    8208 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0811 23:02:23.581526    8208 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/apiserver.crt.dd3b5fb2 ...
	I0811 23:02:23.581555    8208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/apiserver.crt.dd3b5fb2: {Name:mk25d094442765fd57974ba36760a4a13e5c8f18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:02:23.581738    8208 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/apiserver.key.dd3b5fb2 ...
	I0811 23:02:23.581750    8208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/apiserver.key.dd3b5fb2: {Name:mk99c942782ea346e785f7c58350bd94e4f7c0e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:02:23.581827    8208 certs.go:337] copying /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/apiserver.crt
	I0811 23:02:23.581900    8208 certs.go:341] copying /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/apiserver.key
	I0811 23:02:23.581950    8208 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/proxy-client.key
	I0811 23:02:23.581969    8208 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/proxy-client.crt with IP's: []
	I0811 23:02:24.070947    8208 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/proxy-client.crt ...
	I0811 23:02:24.070980    8208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/proxy-client.crt: {Name:mk0e5146fce195a66057bd18aae625f2ea83cd09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:02:24.071162    8208 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/proxy-client.key ...
	I0811 23:02:24.071175    8208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/proxy-client.key: {Name:mkf9584f6941d50605bb440dc21c7a58eb063811 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:02:24.071364    8208 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca-key.pem (1675 bytes)
	I0811 23:02:24.071404    8208 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem (1082 bytes)
	I0811 23:02:24.071434    8208 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/home/jenkins/minikube-integration/17044-2333/.minikube/certs/cert.pem (1123 bytes)
	I0811 23:02:24.071463    8208 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/home/jenkins/minikube-integration/17044-2333/.minikube/certs/key.pem (1675 bytes)
	I0811 23:02:24.072036    8208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0811 23:02:24.104760    8208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0811 23:02:24.134475    8208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0811 23:02:24.163675    8208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0811 23:02:24.193346    8208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0811 23:02:24.222213    8208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0811 23:02:24.251771    8208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0811 23:02:24.279318    8208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0811 23:02:24.308237    8208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0811 23:02:24.337662    8208 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0811 23:02:24.358652    8208 ssh_runner.go:195] Run: openssl version
	I0811 23:02:24.365465    8208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0811 23:02:24.377040    8208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0811 23:02:24.381858    8208 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 11 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I0811 23:02:24.381949    8208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0811 23:02:24.390489    8208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
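	The b5213941.0 name is not arbitrary: it is the OpenSSL subject hash of the CA, which the x509 -hash call two lines up computes. Reproducing it by hand (a sketch):
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    # prints the subject hash used as the symlink name, here b5213941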
	I0811 23:02:24.402227    8208 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0811 23:02:24.406545    8208 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0811 23:02:24.406593    8208 kubeadm.go:404] StartCluster: {Name:addons-557401 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:addons-557401 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0811 23:02:24.406671    8208 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0811 23:02:24.406724    8208 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0811 23:02:24.454822    8208 cri.go:89] found id: ""
	I0811 23:02:24.454937    8208 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0811 23:02:24.465865    8208 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0811 23:02:24.476764    8208 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0811 23:02:24.476851    8208 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0811 23:02:24.487767    8208 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0811 23:02:24.487811    8208 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0811 23:02:24.539876    8208 kubeadm.go:322] [init] Using Kubernetes version: v1.27.4
	I0811 23:02:24.540216    8208 kubeadm.go:322] [preflight] Running pre-flight checks
	I0811 23:02:24.584636    8208 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0811 23:02:24.584704    8208 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1040-aws
	I0811 23:02:24.584742    8208 kubeadm.go:322] OS: Linux
	I0811 23:02:24.584787    8208 kubeadm.go:322] CGROUPS_CPU: enabled
	I0811 23:02:24.584835    8208 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0811 23:02:24.584883    8208 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0811 23:02:24.584930    8208 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0811 23:02:24.584979    8208 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0811 23:02:24.585027    8208 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0811 23:02:24.585073    8208 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0811 23:02:24.585135    8208 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0811 23:02:24.585190    8208 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0811 23:02:24.672170    8208 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0811 23:02:24.672330    8208 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0811 23:02:24.672455    8208 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0811 23:02:24.929470    8208 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0811 23:02:24.932166    8208 out.go:204]   - Generating certificates and keys ...
	I0811 23:02:24.932295    8208 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0811 23:02:24.932379    8208 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0811 23:02:25.407209    8208 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0811 23:02:25.736212    8208 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0811 23:02:26.044375    8208 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0811 23:02:27.090384    8208 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0811 23:02:27.536968    8208 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0811 23:02:27.537316    8208 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-557401 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0811 23:02:28.028893    8208 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0811 23:02:28.029297    8208 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-557401 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0811 23:02:28.427151    8208 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0811 23:02:29.000299    8208 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0811 23:02:29.177645    8208 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0811 23:02:29.177957    8208 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0811 23:02:29.584566    8208 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0811 23:02:30.716665    8208 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0811 23:02:31.256241    8208 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0811 23:02:32.249369    8208 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0811 23:02:32.259931    8208 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0811 23:02:32.261458    8208 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0811 23:02:32.261506    8208 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0811 23:02:32.373522    8208 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0811 23:02:32.375709    8208 out.go:204]   - Booting up control plane ...
	I0811 23:02:32.375822    8208 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0811 23:02:32.375904    8208 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0811 23:02:32.376206    8208 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0811 23:02:32.377607    8208 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0811 23:02:32.380550    8208 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0811 23:02:39.383915    8208 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.002971 seconds
	I0811 23:02:39.384033    8208 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0811 23:02:39.402640    8208 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0811 23:02:39.928746    8208 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0811 23:02:39.928927    8208 kubeadm.go:322] [mark-control-plane] Marking the node addons-557401 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0811 23:02:40.440694    8208 kubeadm.go:322] [bootstrap-token] Using token: 2xvdq6.pugjxj2wazfmbd1f
	I0811 23:02:40.442462    8208 out.go:204]   - Configuring RBAC rules ...
	I0811 23:02:40.442597    8208 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0811 23:02:40.448356    8208 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0811 23:02:40.457757    8208 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0811 23:02:40.461698    8208 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0811 23:02:40.465533    8208 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0811 23:02:40.469480    8208 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0811 23:02:40.487411    8208 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0811 23:02:40.753409    8208 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0811 23:02:40.856039    8208 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0811 23:02:40.857422    8208 kubeadm.go:322] 
	I0811 23:02:40.857489    8208 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0811 23:02:40.857495    8208 kubeadm.go:322] 
	I0811 23:02:40.857567    8208 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0811 23:02:40.857571    8208 kubeadm.go:322] 
	I0811 23:02:40.857596    8208 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0811 23:02:40.857651    8208 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0811 23:02:40.857698    8208 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0811 23:02:40.857702    8208 kubeadm.go:322] 
	I0811 23:02:40.857753    8208 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0811 23:02:40.857770    8208 kubeadm.go:322] 
	I0811 23:02:40.857816    8208 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0811 23:02:40.857820    8208 kubeadm.go:322] 
	I0811 23:02:40.857869    8208 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0811 23:02:40.857939    8208 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0811 23:02:40.858003    8208 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0811 23:02:40.858007    8208 kubeadm.go:322] 
	I0811 23:02:40.858085    8208 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0811 23:02:40.858157    8208 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0811 23:02:40.858161    8208 kubeadm.go:322] 
	I0811 23:02:40.858239    8208 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 2xvdq6.pugjxj2wazfmbd1f \
	I0811 23:02:40.858338    8208 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8884e7cec26767ea186e311f265f5a190c626a6e55b00221424eafcad2c1cce3 \
	I0811 23:02:40.858358    8208 kubeadm.go:322] 	--control-plane 
	I0811 23:02:40.858362    8208 kubeadm.go:322] 
	I0811 23:02:40.858650    8208 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0811 23:02:40.858660    8208 kubeadm.go:322] 
	I0811 23:02:40.858736    8208 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 2xvdq6.pugjxj2wazfmbd1f \
	I0811 23:02:40.858832    8208 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8884e7cec26767ea186e311f265f5a190c626a6e55b00221424eafcad2c1cce3 
	I0811 23:02:40.862079    8208 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1040-aws\n", err: exit status 1
	I0811 23:02:40.862187    8208 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
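	If the --discovery-token-ca-cert-hash printed above ever needs to be recomputed, the standard kubeadm recipe applies. A sketch, noting that on this minikube node the CA sits under /var/lib/minikube/certs rather than the default /etc/kubernetes/pki:
	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'
	    # output is the bare hex digest; prefix it with "sha256:" for the join flag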
	I0811 23:02:40.862201    8208 cni.go:84] Creating CNI manager for ""
	I0811 23:02:40.862210    8208 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0811 23:02:40.864158    8208 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0811 23:02:40.865894    8208 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0811 23:02:40.871057    8208 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.27.4/kubectl ...
	I0811 23:02:40.871113    8208 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0811 23:02:40.912648    8208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0811 23:02:41.864630    8208 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0811 23:02:41.864758    8208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:02:41.864821    8208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl label nodes minikube.k8s.io/version=v1.31.1 minikube.k8s.io/commit=0bff008270ec17d4e0c2c90a14e18ac31a0e01f5 minikube.k8s.io/name=addons-557401 minikube.k8s.io/updated_at=2023_08_11T23_02_41_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:02:41.884737    8208 ops.go:34] apiserver oom_adj: -16
	I0811 23:02:41.993558    8208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:02:42.126103    8208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:02:42.740222    8208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:02:43.239617    8208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:02:43.739647    8208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:02:44.239646    8208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:02:44.739597    8208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:02:45.239577    8208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:02:45.739799    8208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:02:46.240176    8208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:02:46.739630    8208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:02:47.239520    8208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:02:47.739816    8208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:02:48.240533    8208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:02:48.740580    8208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:02:49.239584    8208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:02:49.740557    8208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:02:50.240398    8208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:02:50.740473    8208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:02:51.239683    8208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:02:51.740126    8208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:02:52.240003    8208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:02:52.739692    8208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:02:53.240104    8208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:02:53.739793    8208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:02:53.843873    8208 kubeadm.go:1081] duration metric: took 11.979168731s to wait for elevateKubeSystemPrivileges.
	I0811 23:02:53.843909    8208 kubeadm.go:406] StartCluster complete in 29.437318923s
	I0811 23:02:53.843929    8208 settings.go:142] acquiring lock: {Name:mkcdb2c6d2ae1cdcfca5cf5a992c9589250c7de5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:02:53.844050    8208 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17044-2333/kubeconfig
	I0811 23:02:53.844459    8208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17044-2333/kubeconfig: {Name:mk6629381ac7815dbe689239b7a7612d237ee7bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:02:53.844642    8208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0811 23:02:53.844932    8208 config.go:182] Loaded profile config "addons-557401": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0811 23:02:53.844965    8208 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0811 23:02:53.845031    8208 addons.go:69] Setting volumesnapshots=true in profile "addons-557401"
	I0811 23:02:53.845043    8208 addons.go:231] Setting addon volumesnapshots=true in "addons-557401"
	I0811 23:02:53.845078    8208 host.go:66] Checking if "addons-557401" exists ...
	I0811 23:02:53.845667    8208 addons.go:69] Setting cloud-spanner=true in profile "addons-557401"
	I0811 23:02:53.845689    8208 addons.go:231] Setting addon cloud-spanner=true in "addons-557401"
	I0811 23:02:53.845723    8208 host.go:66] Checking if "addons-557401" exists ...
	I0811 23:02:53.846320    8208 cli_runner.go:164] Run: docker container inspect addons-557401 --format={{.State.Status}}
	I0811 23:02:53.848123    8208 cli_runner.go:164] Run: docker container inspect addons-557401 --format={{.State.Status}}
	I0811 23:02:53.850009    8208 addons.go:69] Setting metrics-server=true in profile "addons-557401"
	I0811 23:02:53.850036    8208 addons.go:231] Setting addon metrics-server=true in "addons-557401"
	I0811 23:02:53.850073    8208 host.go:66] Checking if "addons-557401" exists ...
	I0811 23:02:53.850581    8208 cli_runner.go:164] Run: docker container inspect addons-557401 --format={{.State.Status}}
	I0811 23:02:53.850864    8208 addons.go:69] Setting registry=true in profile "addons-557401"
	I0811 23:02:53.851045    8208 addons.go:231] Setting addon registry=true in "addons-557401"
	I0811 23:02:53.851130    8208 host.go:66] Checking if "addons-557401" exists ...
	I0811 23:02:53.851814    8208 cli_runner.go:164] Run: docker container inspect addons-557401 --format={{.State.Status}}
	I0811 23:02:53.852590    8208 addons.go:69] Setting storage-provisioner=true in profile "addons-557401"
	I0811 23:02:53.852651    8208 addons.go:231] Setting addon storage-provisioner=true in "addons-557401"
	I0811 23:02:53.852722    8208 host.go:66] Checking if "addons-557401" exists ...
	I0811 23:02:53.853361    8208 cli_runner.go:164] Run: docker container inspect addons-557401 --format={{.State.Status}}
	I0811 23:02:53.873331    8208 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-557401"
	I0811 23:02:53.873439    8208 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-557401"
	I0811 23:02:53.873518    8208 host.go:66] Checking if "addons-557401" exists ...
	I0811 23:02:53.874003    8208 cli_runner.go:164] Run: docker container inspect addons-557401 --format={{.State.Status}}
	I0811 23:02:53.887701    8208 addons.go:69] Setting default-storageclass=true in profile "addons-557401"
	I0811 23:02:53.887729    8208 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-557401"
	I0811 23:02:53.888048    8208 cli_runner.go:164] Run: docker container inspect addons-557401 --format={{.State.Status}}
	I0811 23:02:53.899468    8208 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0811 23:02:53.901331    8208 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0811 23:02:53.901351    8208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0811 23:02:53.901417    8208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557401
	I0811 23:02:53.899728    8208 addons.go:69] Setting gcp-auth=true in profile "addons-557401"
	I0811 23:02:53.927983    8208 mustload.go:65] Loading cluster: addons-557401
	I0811 23:02:53.928195    8208 config.go:182] Loaded profile config "addons-557401": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0811 23:02:53.928445    8208 cli_runner.go:164] Run: docker container inspect addons-557401 --format={{.State.Status}}
	I0811 23:02:53.899742    8208 addons.go:69] Setting ingress=true in profile "addons-557401"
	I0811 23:02:53.945361    8208 addons.go:231] Setting addon ingress=true in "addons-557401"
	I0811 23:02:53.945435    8208 host.go:66] Checking if "addons-557401" exists ...
	I0811 23:02:53.945888    8208 cli_runner.go:164] Run: docker container inspect addons-557401 --format={{.State.Status}}
	I0811 23:02:53.899756    8208 addons.go:69] Setting ingress-dns=true in profile "addons-557401"
	I0811 23:02:53.960448    8208 addons.go:231] Setting addon ingress-dns=true in "addons-557401"
	I0811 23:02:53.960503    8208 host.go:66] Checking if "addons-557401" exists ...
	I0811 23:02:53.899764    8208 addons.go:69] Setting inspektor-gadget=true in profile "addons-557401"
	I0811 23:02:53.960740    8208 addons.go:231] Setting addon inspektor-gadget=true in "addons-557401"
	I0811 23:02:53.960782    8208 host.go:66] Checking if "addons-557401" exists ...
	I0811 23:02:53.963401    8208 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.9
	I0811 23:02:53.962674    8208 cli_runner.go:164] Run: docker container inspect addons-557401 --format={{.State.Status}}
	I0811 23:02:53.965284    8208 cli_runner.go:164] Run: docker container inspect addons-557401 --format={{.State.Status}}
	I0811 23:02:53.974985    8208 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I0811 23:02:53.975005    8208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)
	I0811 23:02:53.975062    8208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557401
	I0811 23:02:53.994782    8208 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0811 23:02:53.998311    8208 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0811 23:02:53.998333    8208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0811 23:02:53.998408    8208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557401
	I0811 23:02:54.055447    8208 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-557401" context rescaled to 1 replica
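	That rescale is equivalent to scaling the deployment by hand (a sketch; the context name is taken from this run):
	    kubectl --context addons-557401 -n kube-system scale deployment coredns --replicas=1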
	I0811 23:02:54.055489    8208 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0811 23:02:54.064707    8208 out.go:177] * Verifying Kubernetes components...
	I0811 23:02:54.067243    8208 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0811 23:02:54.070572    8208 out.go:177]   - Using image docker.io/registry:2.8.1
	I0811 23:02:54.067514    8208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0811 23:02:54.075992    8208 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I0811 23:02:54.076006    8208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0811 23:02:54.076089    8208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557401
	I0811 23:02:54.097490    8208 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0811 23:02:54.101305    8208 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0811 23:02:54.101338    8208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0811 23:02:54.101413    8208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557401
	I0811 23:02:54.110099    8208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/addons-557401/id_rsa Username:docker}
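Each "scp memory --> <path> (<n> bytes)" line streams an addon manifest held in memory through an SSH client like the one created here, rather than copying a file from disk. A rough illustration under stated assumptions (key auth with the id_rsa shown above, and "sudo tee" as the remote write mechanism); this is an illustrative stand-in, not minikube's actual sshutil/ssh_runner code:

    package sshcopy

    import (
        "bytes"
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // copyMemoryAsset pushes in-memory bytes to dst on the node over SSH.
    func copyMemoryAsset(addr, user, keyPath, dst string, data []byte) error {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(data) // stream the asset, no temp file on disk
        return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", dst))
    }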
	I0811 23:02:54.125488    8208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0811 23:02:54.136384    8208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/addons-557401/id_rsa Username:docker}
	I0811 23:02:54.147350    8208 node_ready.go:35] waiting up to 6m0s for node "addons-557401" to be "Ready" ...
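node_ready.go now polls the node object until its Ready condition turns True, bounded by the 6m0s announced at start.go:223; the recurring "Ready":"False" lines below are single iterations of that loop. A compact equivalent with client-go, assuming a reachable kubeconfig (the function name is illustrative):

    package nodewait

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls until the named node reports a True Ready condition.
    func waitNodeReady(kubeconfig, name string, timeout time.Duration) error {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return err
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            return err
        }
        return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // transient API errors: keep polling
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }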
	I0811 23:02:54.184094    8208 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0811 23:02:54.186340    8208 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0811 23:02:54.190487    8208 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0811 23:02:54.186574    8208 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0811 23:02:54.195923    8208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0811 23:02:54.196001    8208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557401
	I0811 23:02:54.202946    8208 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0811 23:02:54.207986    8208 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0811 23:02:54.210255    8208 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0811 23:02:54.209906    8208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/addons-557401/id_rsa Username:docker}
	I0811 23:02:54.219662    8208 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0811 23:02:54.221466    8208 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0811 23:02:54.223132    8208 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0811 23:02:54.224920    8208 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0811 23:02:54.224938    8208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0811 23:02:54.225007    8208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557401
	I0811 23:02:54.239418    8208 addons.go:231] Setting addon default-storageclass=true in "addons-557401"
	I0811 23:02:54.239469    8208 host.go:66] Checking if "addons-557401" exists ...
	I0811 23:02:54.239898    8208 cli_runner.go:164] Run: docker container inspect addons-557401 --format={{.State.Status}}
	I0811 23:02:54.259258    8208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/addons-557401/id_rsa Username:docker}
	I0811 23:02:54.270787    8208 host.go:66] Checking if "addons-557401" exists ...
	I0811 23:02:54.300088    8208 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.19.0
	I0811 23:02:54.301759    8208 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0811 23:02:54.301815    8208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0811 23:02:54.301910    8208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557401
	I0811 23:02:54.319980    8208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/addons-557401/id_rsa Username:docker}
	I0811 23:02:54.342344    8208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/addons-557401/id_rsa Username:docker}
	I0811 23:02:54.349307    8208 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0811 23:02:54.351733    8208 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0811 23:02:54.359282    8208 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.8.1
	I0811 23:02:54.361430    8208 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0811 23:02:54.361453    8208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0811 23:02:54.361516    8208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557401
	I0811 23:02:54.395045    8208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/addons-557401/id_rsa Username:docker}
	I0811 23:02:54.409612    8208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/addons-557401/id_rsa Username:docker}
	I0811 23:02:54.430790    8208 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0811 23:02:54.430808    8208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0811 23:02:54.430867    8208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557401
	I0811 23:02:54.450749    8208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/addons-557401/id_rsa Username:docker}
	I0811 23:02:54.475024    8208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/addons-557401/id_rsa Username:docker}
	I0811 23:02:54.550817    8208 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0811 23:02:54.550885    8208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0811 23:02:54.632773    8208 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I0811 23:02:54.632843    8208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0811 23:02:54.634728    8208 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0811 23:02:54.634791    8208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0811 23:02:54.675984    8208 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0811 23:02:54.676052    8208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0811 23:02:54.695462    8208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0811 23:02:54.751144    8208 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0811 23:02:54.751206    8208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0811 23:02:54.751517    8208 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0811 23:02:54.751553    8208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0811 23:02:54.779636    8208 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0811 23:02:54.779702    8208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0811 23:02:54.848692    8208 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0811 23:02:54.848712    8208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0811 23:02:54.851681    8208 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0811 23:02:54.851700    8208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0811 23:02:54.852399    8208 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0811 23:02:54.852449    8208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0811 23:02:54.878612    8208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0811 23:02:54.895864    8208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0811 23:02:54.897589    8208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0811 23:02:54.900809    8208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0811 23:02:54.905754    8208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0811 23:02:54.964527    8208 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0811 23:02:54.964552    8208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0811 23:02:54.967592    8208 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0811 23:02:54.967616    8208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0811 23:02:54.990268    8208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0811 23:02:54.994925    8208 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0811 23:02:54.994951    8208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0811 23:02:55.090264    8208 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0811 23:02:55.090292    8208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0811 23:02:55.149166    8208 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0811 23:02:55.149187    8208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0811 23:02:55.179838    8208 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0811 23:02:55.179864    8208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0811 23:02:55.268286    8208 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0811 23:02:55.268313    8208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0811 23:02:55.312876    8208 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0811 23:02:55.312901    8208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0811 23:02:55.343216    8208 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0811 23:02:55.343240    8208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0811 23:02:55.552123    8208 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0811 23:02:55.552147    8208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0811 23:02:55.571655    8208 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0811 23:02:55.571680    8208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0811 23:02:55.593455    8208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0811 23:02:55.664116    8208 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0811 23:02:55.664142    8208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0811 23:02:55.678729    8208 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0811 23:02:55.678754    8208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0811 23:02:55.731404    8208 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0811 23:02:55.731429    8208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0811 23:02:55.742683    8208 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0811 23:02:55.742707    8208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0811 23:02:55.782301    8208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0811 23:02:55.829507    8208 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0811 23:02:55.829539    8208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0811 23:02:55.998844    8208 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0811 23:02:55.998870    8208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0811 23:02:56.161947    8208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0811 23:02:56.514351    8208 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.388832208s)
	I0811 23:02:56.514380    8208 start.go:901] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
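The two sed insert expressions in the command that just completed can be read back into the Corefile they produce: CoreDNS gains a "log" directive ahead of "errors", and a static hosts block ahead of the resolv.conf forwarder, so host.minikube.internal resolves to the gateway (192.168.49.1) from inside the cluster. Reconstructed from those expressions, the edited ConfigMap contains:

        log
        errors
        ...
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf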
	I0811 23:02:56.723690    8208 node_ready.go:58] node "addons-557401" has status "Ready":"False"
	I0811 23:02:58.222057    8208 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.526562669s)
	I0811 23:02:58.825006    8208 node_ready.go:58] node "addons-557401" has status "Ready":"False"
	I0811 23:02:58.946079    8208 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.067426489s)
	I0811 23:02:58.946183    8208 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.050293207s)
	I0811 23:02:58.946200    8208 addons.go:467] Verifying addon registry=true in "addons-557401"
	I0811 23:02:58.948083    8208 out.go:177] * Verifying registry addon...
	I0811 23:02:58.950623    8208 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0811 23:02:58.978587    8208 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0811 23:02:58.978686    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:02:58.985953    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
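The kapi.go:96 lines that fill most of the remaining log are iterations of one loop per addon: re-list the pods behind a label selector and report their state until all of them are Running. A minimal client-go version of that wait, under the same kubeconfig assumptions as the node-readiness sketch above (function name illustrative):

    package podwait

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodsRunning polls until every pod matching selector in ns is Running.
    func waitPodsRunning(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
        return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
            pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
                metav1.ListOptions{LabelSelector: selector})
            if err != nil || len(pods.Items) == 0 {
                return false, nil // not found yet, or transient error: retry
            }
            for _, p := range pods.Items {
                if p.Status.Phase != corev1.PodRunning {
                    return false, nil // still Pending, as in the log lines here
                }
            }
            return true, nil
        })
    }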
	I0811 23:02:59.506740    8208 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.609117785s)
	I0811 23:02:59.506821    8208 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.605990583s)
	I0811 23:02:59.506857    8208 addons.go:467] Verifying addon ingress=true in "addons-557401"
	I0811 23:02:59.508902    8208 out.go:177] * Verifying ingress addon...
	I0811 23:02:59.506968    8208 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.601190486s)
	I0811 23:02:59.507059    8208 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.516762328s)
	I0811 23:02:59.507153    8208 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.913664011s)
	I0811 23:02:59.507230    8208 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.724897357s)
	I0811 23:02:59.510694    8208 addons.go:467] Verifying addon metrics-server=true in "addons-557401"
	W0811 23:02:59.510759    8208 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0811 23:02:59.510790    8208 retry.go:31] will retry after 295.108827ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0811 23:02:59.512315    8208 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0811 23:02:59.518857    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:02:59.524316    8208 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0811 23:02:59.524341    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:02:59.529057    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:02:59.806658    8208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
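The failure retried here is a CRD ordering race: the snapshot CRDs and the VolumeSnapshotClass that instantiates them land in the same apply batch, so the first pass fails with "no matches for kind" before the CRDs are established, and retry.go reruns the apply (now with --force) 295ms later. The generic shape of that fix, sketched as a backoff loop around kubectl; the file list and function name are illustrative, not minikube's retry implementation:

    package applyretry

    import (
        "os/exec"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // applyWithBackoff reruns `kubectl apply` until the API server has
    // established the CRDs the manifests depend on, or the backoff is spent.
    func applyWithBackoff(files ...string) error {
        args := []string{"apply"}
        for _, f := range files {
            args = append(args, "-f", f)
        }
        backoff := wait.Backoff{Duration: 300 * time.Millisecond, Factor: 2, Steps: 5}
        return wait.ExponentialBackoff(backoff, func() (bool, error) {
            if err := exec.Command("kubectl", args...).Run(); err != nil {
                return false, nil // CRDs may not be registered yet; try again
            }
            return true, nil
        })
    }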
	I0811 23:02:59.926250    8208 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.764243775s)
	I0811 23:02:59.926284    8208 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-557401"
	I0811 23:02:59.928371    8208 out.go:177] * Verifying csi-hostpath-driver addon...
	I0811 23:02:59.930963    8208 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0811 23:02:59.962772    8208 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0811 23:02:59.962798    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:02:59.970149    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:02:59.993160    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:00.092964    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:00.506533    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:00.520539    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:00.570065    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:00.988808    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:01.016587    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:01.060367    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:01.272737    8208 node_ready.go:58] node "addons-557401" has status "Ready":"False"
	I0811 23:03:01.485961    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:01.507961    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:01.543541    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:01.636310    8208 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0811 23:03:01.636382    8208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557401
	I0811 23:03:01.677136    8208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/addons-557401/id_rsa Username:docker}
	I0811 23:03:01.739813    8208 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.933108549s)
	I0811 23:03:01.889073    8208 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0811 23:03:01.960035    8208 addons.go:231] Setting addon gcp-auth=true in "addons-557401"
	I0811 23:03:01.960085    8208 host.go:66] Checking if "addons-557401" exists ...
	I0811 23:03:01.960599    8208 cli_runner.go:164] Run: docker container inspect addons-557401 --format={{.State.Status}}
	I0811 23:03:01.976398    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:01.993255    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:02.004898    8208 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0811 23:03:02.004965    8208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-557401
	I0811 23:03:02.034419    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:02.049286    8208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/addons-557401/id_rsa Username:docker}
	I0811 23:03:02.212326    8208 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0811 23:03:02.213949    8208 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0811 23:03:02.215759    8208 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0811 23:03:02.215782    8208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0811 23:03:02.289641    8208 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0811 23:03:02.289667    8208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0811 23:03:02.321990    8208 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0811 23:03:02.322016    8208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0811 23:03:02.347005    8208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0811 23:03:02.482057    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:02.493376    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:02.534479    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:02.974959    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:02.991257    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:03.042183    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:03.337627    8208 node_ready.go:58] node "addons-557401" has status "Ready":"False"
	I0811 23:03:03.364182    8208 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.017134179s)
	I0811 23:03:03.366004    8208 addons.go:467] Verifying addon gcp-auth=true in "addons-557401"
	I0811 23:03:03.368080    8208 out.go:177] * Verifying gcp-auth addon...
	I0811 23:03:03.370589    8208 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0811 23:03:03.420413    8208 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0811 23:03:03.420479    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:03.437710    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:03.474875    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:03.490636    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:03.535008    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:03.942183    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:03.975053    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:03.993179    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:04.036169    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:04.443016    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:04.474903    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:04.490281    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:04.533683    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:04.942528    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:04.974936    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:04.992275    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:05.034131    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:05.442770    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:05.477252    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:05.490896    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:05.534590    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:05.772530    8208 node_ready.go:58] node "addons-557401" has status "Ready":"False"
	I0811 23:03:05.941414    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:05.978855    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:05.997989    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:06.035469    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:06.443045    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:06.475582    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:06.491854    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:06.534757    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:06.942298    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:06.975153    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:06.991215    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:07.034583    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:07.442690    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:07.475281    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:07.491022    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:07.534398    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:07.942948    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:07.975913    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:07.990890    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:08.034447    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:08.274655    8208 node_ready.go:58] node "addons-557401" has status "Ready":"False"
	I0811 23:03:08.443396    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:08.475743    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:08.492670    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:08.534741    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:08.941432    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:08.975124    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:08.990823    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:09.035899    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:09.442846    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:09.476527    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:09.491017    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:09.538177    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:09.942437    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:09.975327    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:09.990998    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:10.043342    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:10.442849    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:10.483032    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:10.490689    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:10.533740    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:10.771393    8208 node_ready.go:58] node "addons-557401" has status "Ready":"False"
	I0811 23:03:10.944368    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:10.975204    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:10.990260    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:11.034434    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:11.442478    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:11.476275    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:11.491408    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:11.533780    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:11.941717    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:11.974603    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:11.989940    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:12.033976    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:12.442727    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:12.475929    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:12.490287    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:12.533980    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:12.771863    8208 node_ready.go:58] node "addons-557401" has status "Ready":"False"
	I0811 23:03:12.941931    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:12.974596    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:12.990199    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:13.033929    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:13.442314    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:13.475084    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:13.490381    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:13.534126    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:13.946332    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:13.975006    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:13.990102    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:14.034135    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:14.442492    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:14.476000    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:14.490424    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:14.534353    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:14.941273    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:14.975224    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:14.990799    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:15.034181    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:15.271694    8208 node_ready.go:58] node "addons-557401" has status "Ready":"False"
	I0811 23:03:15.441686    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:15.475267    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:15.490540    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:15.535140    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:15.943404    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:15.974613    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:15.990669    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:16.033495    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:16.442354    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:16.475289    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:16.491156    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:16.533945    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:16.941267    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:16.975090    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:16.990524    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:17.033294    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:17.441971    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:17.475485    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:17.490000    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:17.533866    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:17.771505    8208 node_ready.go:58] node "addons-557401" has status "Ready":"False"
	I0811 23:03:17.942161    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:17.975397    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:17.990791    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:18.034195    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:18.441384    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:18.474880    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:18.490758    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:18.533458    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:18.941335    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:18.975280    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:18.990929    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:19.034059    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:19.442771    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:19.476994    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:19.492416    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:19.536936    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:19.942163    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:19.974576    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:19.990397    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:20.033932    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:20.271602    8208 node_ready.go:58] node "addons-557401" has status "Ready":"False"
	I0811 23:03:20.442054    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:20.475345    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:20.491211    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:20.533463    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:20.941538    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:20.974493    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:20.990800    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:21.033878    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:21.442193    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:21.475483    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:21.490552    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:21.533418    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:21.942112    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:21.974937    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:21.990700    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:22.033644    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:22.441904    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:22.475061    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:22.490873    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:22.533618    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:22.771369    8208 node_ready.go:58] node "addons-557401" has status "Ready":"False"
	I0811 23:03:22.941223    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:22.974887    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:22.991814    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:23.040058    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:23.441886    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:23.479396    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:23.490512    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:23.533649    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:23.941536    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:23.974930    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:23.989801    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:24.033920    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:24.442106    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:24.475574    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:24.491252    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:24.534110    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:24.771783    8208 node_ready.go:58] node "addons-557401" has status "Ready":"False"
	I0811 23:03:24.941686    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:24.975113    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:24.989936    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:25.034512    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:25.441974    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:25.474878    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:25.490725    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:25.533940    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:25.940959    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:25.974874    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:25.990541    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:26.033992    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:26.443317    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:26.474887    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:26.490477    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:26.533612    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:26.941820    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:26.975769    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:26.998051    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:27.053628    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:27.298363    8208 node_ready.go:49] node "addons-557401" has status "Ready":"True"
	I0811 23:03:27.298388    8208 node_ready.go:38] duration metric: took 33.151012102s waiting for node "addons-557401" to be "Ready" ...
	I0811 23:03:27.298398    8208 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0811 23:03:27.352533    8208 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-ftx4z" in "kube-system" namespace to be "Ready" ...
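The pod_ready wait above polls the pod object until its Ready condition reports True, up to the 6m0s deadline. A minimal client-go sketch of that polling pattern follows (assuming the default kubeconfig and the pod name taken from the log; this is an illustration of the pattern, not minikube's actual pod_ready.go):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's PodReady condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll roughly every 500ms for up to 6 minutes, matching the
		// deadline shown in the log line above.
		err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
				"coredns-5d78c9869d-ftx4z", metav1.GetOptions{})
			if err != nil {
				return false, nil // transient error: keep polling
			}
			return isPodReady(pod), nil
		})
		if err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}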
	I0811 23:03:27.448160    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:27.482728    8208 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0811 23:03:27.482754    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:27.512341    8208 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0811 23:03:27.512368    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:27.571223    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:27.949504    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:27.977292    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:27.990868    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:28.034476    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:28.443918    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:28.492839    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:28.493728    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:28.536814    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:28.895670    8208 pod_ready.go:92] pod "coredns-5d78c9869d-ftx4z" in "kube-system" namespace has status "Ready":"True"
	I0811 23:03:28.895695    8208 pod_ready.go:81] duration metric: took 1.543128794s waiting for pod "coredns-5d78c9869d-ftx4z" in "kube-system" namespace to be "Ready" ...
	I0811 23:03:28.895718    8208 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-557401" in "kube-system" namespace to be "Ready" ...
	I0811 23:03:28.901362    8208 pod_ready.go:92] pod "etcd-addons-557401" in "kube-system" namespace has status "Ready":"True"
	I0811 23:03:28.901385    8208 pod_ready.go:81] duration metric: took 5.659651ms waiting for pod "etcd-addons-557401" in "kube-system" namespace to be "Ready" ...
	I0811 23:03:28.901399    8208 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-557401" in "kube-system" namespace to be "Ready" ...
	I0811 23:03:28.907271    8208 pod_ready.go:92] pod "kube-apiserver-addons-557401" in "kube-system" namespace has status "Ready":"True"
	I0811 23:03:28.907296    8208 pod_ready.go:81] duration metric: took 5.888741ms waiting for pod "kube-apiserver-addons-557401" in "kube-system" namespace to be "Ready" ...
	I0811 23:03:28.907308    8208 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-557401" in "kube-system" namespace to be "Ready" ...
	I0811 23:03:28.913393    8208 pod_ready.go:92] pod "kube-controller-manager-addons-557401" in "kube-system" namespace has status "Ready":"True"
	I0811 23:03:28.913420    8208 pod_ready.go:81] duration metric: took 6.103224ms waiting for pod "kube-controller-manager-addons-557401" in "kube-system" namespace to be "Ready" ...
	I0811 23:03:28.913434    8208 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cmkc6" in "kube-system" namespace to be "Ready" ...
	I0811 23:03:28.941590    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:28.985210    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:28.992616    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:29.035342    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:29.272498    8208 pod_ready.go:92] pod "kube-proxy-cmkc6" in "kube-system" namespace has status "Ready":"True"
	I0811 23:03:29.272524    8208 pod_ready.go:81] duration metric: took 359.060462ms waiting for pod "kube-proxy-cmkc6" in "kube-system" namespace to be "Ready" ...
	I0811 23:03:29.272535    8208 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-557401" in "kube-system" namespace to be "Ready" ...
	I0811 23:03:29.443670    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:29.480256    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:29.504690    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:29.534783    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:29.672411    8208 pod_ready.go:92] pod "kube-scheduler-addons-557401" in "kube-system" namespace has status "Ready":"True"
	I0811 23:03:29.672433    8208 pod_ready.go:81] duration metric: took 399.890956ms waiting for pod "kube-scheduler-addons-557401" in "kube-system" namespace to be "Ready" ...
	I0811 23:03:29.672444    8208 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7746886d4f-lfs9m" in "kube-system" namespace to be "Ready" ...
	I0811 23:03:29.941599    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:29.977468    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:29.993177    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:30.038695    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:30.442373    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:30.481910    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:30.492098    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:30.541635    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:30.942531    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:30.982655    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:30.998314    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:31.039453    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:31.444013    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:31.480833    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:31.492702    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:31.542520    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:31.947056    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:31.976530    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:31.999957    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:32.000563    8208 pod_ready.go:102] pod "metrics-server-7746886d4f-lfs9m" in "kube-system" namespace has status "Ready":"False"
	I0811 23:03:32.041820    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:32.444373    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:32.487882    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:32.494343    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:32.534458    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:32.941501    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:32.975730    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:32.983371    8208 pod_ready.go:92] pod "metrics-server-7746886d4f-lfs9m" in "kube-system" namespace has status "Ready":"True"
	I0811 23:03:32.983438    8208 pod_ready.go:81] duration metric: took 3.310985923s waiting for pod "metrics-server-7746886d4f-lfs9m" in "kube-system" namespace to be "Ready" ...
	I0811 23:03:32.983465    8208 pod_ready.go:38] duration metric: took 5.685055393s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0811 23:03:32.983515    8208 api_server.go:52] waiting for apiserver process to appear ...
	I0811 23:03:32.983577    8208 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0811 23:03:32.993222    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:33.000550    8208 api_server.go:72] duration metric: took 38.94503223s to wait for apiserver process to appear ...
	I0811 23:03:33.000579    8208 api_server.go:88] waiting for apiserver healthz status ...
	I0811 23:03:33.000597    8208 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0811 23:03:33.015100    8208 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0811 23:03:33.016568    8208 api_server.go:141] control plane version: v1.27.4
	I0811 23:03:33.016598    8208 api_server.go:131] duration metric: took 16.012265ms to wait for apiserver health ...
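The healthz wait above is a plain HTTPS GET against /healthz on the apiserver, succeeding once it returns 200 with the body "ok" (both visible in the log). A rough equivalent is sketched below; the InsecureSkipVerify is an illustration shortcut only, since the real check authenticates the apiserver with the cluster CA from the kubeconfig:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Illustration only: skip TLS verification. The real check
			// trusts the cluster CA instead.
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
	}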
	I0811 23:03:33.016607    8208 system_pods.go:43] waiting for kube-system pods to appear ...
	I0811 23:03:33.035143    8208 system_pods.go:59] 17 kube-system pods found
	I0811 23:03:33.035329    8208 system_pods.go:61] "coredns-5d78c9869d-ftx4z" [d290aa01-95ab-42c3-8db0-2d4d39d262ac] Running
	I0811 23:03:33.035342    8208 system_pods.go:61] "csi-hostpath-attacher-0" [5dd078a7-d01d-4ddc-a722-ef67231b6804] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0811 23:03:33.035351    8208 system_pods.go:61] "csi-hostpath-resizer-0" [3865ef81-180b-4f3c-a1c4-818c5b086c50] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0811 23:03:33.035362    8208 system_pods.go:61] "csi-hostpathplugin-hfjmx" [6f4c8997-a159-47ac-b6d0-e00a2c50fdb5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0811 23:03:33.035379    8208 system_pods.go:61] "etcd-addons-557401" [927be848-ce79-44cd-a359-e339ed70bcb2] Running
	I0811 23:03:33.035386    8208 system_pods.go:61] "kindnet-2c4dk" [cfcb6b34-68c8-4488-b1c1-d8b10804b397] Running
	I0811 23:03:33.035391    8208 system_pods.go:61] "kube-apiserver-addons-557401" [a827e88e-e1c3-4f92-b64d-102db2be57d9] Running
	I0811 23:03:33.035396    8208 system_pods.go:61] "kube-controller-manager-addons-557401" [fee33f82-1c37-4edb-9647-19f4f8ac3038] Running
	I0811 23:03:33.035404    8208 system_pods.go:61] "kube-ingress-dns-minikube" [98138bc5-f73b-49bc-967d-a1b554ab9ae6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0811 23:03:33.035409    8208 system_pods.go:61] "kube-proxy-cmkc6" [dbbfb4ce-ada4-430b-b25a-64a14495e3dc] Running
	I0811 23:03:33.035414    8208 system_pods.go:61] "kube-scheduler-addons-557401" [93c1adcc-1d58-470a-988e-b53fdaa835e2] Running
	I0811 23:03:33.035419    8208 system_pods.go:61] "metrics-server-7746886d4f-lfs9m" [43d07384-7646-4ba7-b848-8899ed88f301] Running
	I0811 23:03:33.035424    8208 system_pods.go:61] "registry-f97vk" [599382dd-a86d-4d20-b84c-0cd4defdc9a1] Running
	I0811 23:03:33.035431    8208 system_pods.go:61] "registry-proxy-djx95" [035ad142-8c49-40e5-8d68-7bba6b06c8c0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0811 23:03:33.035439    8208 system_pods.go:61] "snapshot-controller-75bbb956b9-p9vfs" [2f29941c-5218-4392-9c4c-cd6f08556d34] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0811 23:03:33.035447    8208 system_pods.go:61] "snapshot-controller-75bbb956b9-t4hph" [10b5451b-d9c2-45dd-b4e1-d612288168cb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0811 23:03:33.035454    8208 system_pods.go:61] "storage-provisioner" [3f03d08e-eb32-4727-868b-d347b85775bb] Running
	I0811 23:03:33.035459    8208 system_pods.go:74] duration metric: took 18.847001ms to wait for pod list to return data ...
	I0811 23:03:33.035468    8208 default_sa.go:34] waiting for default service account to be created ...
	I0811 23:03:33.037538    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:33.039482    8208 default_sa.go:45] found service account: "default"
	I0811 23:03:33.039503    8208 default_sa.go:55] duration metric: took 4.030244ms for default service account to be created ...
	I0811 23:03:33.039513    8208 system_pods.go:116] waiting for k8s-apps to be running ...
	I0811 23:03:33.080872    8208 system_pods.go:86] 17 kube-system pods found
	I0811 23:03:33.080913    8208 system_pods.go:89] "coredns-5d78c9869d-ftx4z" [d290aa01-95ab-42c3-8db0-2d4d39d262ac] Running
	I0811 23:03:33.080925    8208 system_pods.go:89] "csi-hostpath-attacher-0" [5dd078a7-d01d-4ddc-a722-ef67231b6804] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0811 23:03:33.080934    8208 system_pods.go:89] "csi-hostpath-resizer-0" [3865ef81-180b-4f3c-a1c4-818c5b086c50] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0811 23:03:33.080942    8208 system_pods.go:89] "csi-hostpathplugin-hfjmx" [6f4c8997-a159-47ac-b6d0-e00a2c50fdb5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0811 23:03:33.080948    8208 system_pods.go:89] "etcd-addons-557401" [927be848-ce79-44cd-a359-e339ed70bcb2] Running
	I0811 23:03:33.080953    8208 system_pods.go:89] "kindnet-2c4dk" [cfcb6b34-68c8-4488-b1c1-d8b10804b397] Running
	I0811 23:03:33.080960    8208 system_pods.go:89] "kube-apiserver-addons-557401" [a827e88e-e1c3-4f92-b64d-102db2be57d9] Running
	I0811 23:03:33.080966    8208 system_pods.go:89] "kube-controller-manager-addons-557401" [fee33f82-1c37-4edb-9647-19f4f8ac3038] Running
	I0811 23:03:33.080975    8208 system_pods.go:89] "kube-ingress-dns-minikube" [98138bc5-f73b-49bc-967d-a1b554ab9ae6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0811 23:03:33.080988    8208 system_pods.go:89] "kube-proxy-cmkc6" [dbbfb4ce-ada4-430b-b25a-64a14495e3dc] Running
	I0811 23:03:33.080994    8208 system_pods.go:89] "kube-scheduler-addons-557401" [93c1adcc-1d58-470a-988e-b53fdaa835e2] Running
	I0811 23:03:33.081002    8208 system_pods.go:89] "metrics-server-7746886d4f-lfs9m" [43d07384-7646-4ba7-b848-8899ed88f301] Running
	I0811 23:03:33.081008    8208 system_pods.go:89] "registry-f97vk" [599382dd-a86d-4d20-b84c-0cd4defdc9a1] Running
	I0811 23:03:33.081015    8208 system_pods.go:89] "registry-proxy-djx95" [035ad142-8c49-40e5-8d68-7bba6b06c8c0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0811 23:03:33.081026    8208 system_pods.go:89] "snapshot-controller-75bbb956b9-p9vfs" [2f29941c-5218-4392-9c4c-cd6f08556d34] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0811 23:03:33.081038    8208 system_pods.go:89] "snapshot-controller-75bbb956b9-t4hph" [10b5451b-d9c2-45dd-b4e1-d612288168cb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0811 23:03:33.081044    8208 system_pods.go:89] "storage-provisioner" [3f03d08e-eb32-4727-868b-d347b85775bb] Running
	I0811 23:03:33.081050    8208 system_pods.go:126] duration metric: took 41.532786ms to wait for k8s-apps to be running ...
	I0811 23:03:33.081061    8208 system_svc.go:44] waiting for kubelet service to be running ....
	I0811 23:03:33.081130    8208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0811 23:03:33.095363    8208 system_svc.go:56] duration metric: took 14.293036ms WaitForService to wait for kubelet.
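Both the apiserver-process check earlier (sudo pgrep -xnf kube-apiserver.*minikube.*) and this kubelet check run a command on the node through ssh_runner and care only about the exit status. A local stand-in using os/exec (assuming sudo and systemd are available; the real command runs over SSH inside the node):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// systemctl is-active --quiet exits 0 iff the unit is active;
		// there is no output to parse.
		cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
		if err := cmd.Run(); err != nil {
			fmt.Println("kubelet is not active:", err)
			return
		}
		fmt.Println("kubelet is active")
	}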
	I0811 23:03:33.095391    8208 kubeadm.go:581] duration metric: took 39.039877812s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0811 23:03:33.095416    8208 node_conditions.go:102] verifying NodePressure condition ...
	I0811 23:03:33.272199    8208 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0811 23:03:33.272229    8208 node_conditions.go:123] node cpu capacity is 2
	I0811 23:03:33.272263    8208 node_conditions.go:105] duration metric: took 176.821897ms to run NodePressure ...
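The NodePressure step reads the two capacity figures above straight off the Node object's status. A sketch of fetching the same numbers with client-go (default kubeconfig assumed, node name taken from the log):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-557401", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// The two capacity lines in the log correspond to these fields.
		fmt.Println("ephemeral-storage:", node.Status.Capacity.StorageEphemeral().String()) // 203034800Ki
		fmt.Println("cpu:", node.Status.Capacity.Cpu().String())                            // 2
	}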
	I0811 23:03:33.272284    8208 start.go:228] waiting for startup goroutines ...
	I0811 23:03:33.442324    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:33.477018    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:33.492033    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:33.534282    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:33.942312    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:33.977800    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:33.997221    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:34.035020    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:34.443138    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:34.480137    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:34.498710    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:34.540422    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:34.941388    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:34.976538    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:34.991829    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:35.041857    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:35.442387    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:35.475957    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:35.490962    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:35.534438    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:35.941860    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:35.978293    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:35.992180    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:36.036096    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:36.443841    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:36.478184    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:36.493352    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:36.536645    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:36.941699    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:36.976650    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:37.000466    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:37.068473    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:37.442698    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:37.477287    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:37.492733    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:37.534280    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:37.942948    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:37.977036    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:37.991773    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:38.034666    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:38.442598    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:38.475663    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:38.491292    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:38.534356    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:38.941474    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:38.975514    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:38.991436    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:39.034719    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:39.441938    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:39.479270    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:39.510297    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:39.536686    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:39.942601    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:39.976293    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:39.991773    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:40.044358    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:40.444072    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:40.477292    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:40.492842    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:40.538920    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:40.941864    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:40.976273    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:40.990655    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:41.033499    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:41.441769    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:41.477285    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:41.490733    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:41.533720    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:41.942346    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:41.979527    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:41.991807    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:42.034163    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:42.443452    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:42.478283    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:42.491537    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:42.536216    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:42.942535    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:42.976915    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:42.994893    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:43.034998    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:43.447853    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:43.475924    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:43.490621    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:43.534345    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:43.943064    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:43.975858    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:43.991885    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:44.034788    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:44.443848    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:44.483243    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:44.501164    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:44.535058    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:44.943188    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:44.976660    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:44.990813    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:45.036549    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:45.441849    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:45.476174    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:45.490697    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:45.533297    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:45.942788    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:45.977662    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:45.993850    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:46.045545    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:46.441792    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:46.476354    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:46.493107    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:46.535527    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:46.941343    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:46.976094    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:46.990776    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:47.033704    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:47.442507    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:47.476127    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:47.490994    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:47.534106    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:47.942272    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:47.977459    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:47.990509    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:48.034598    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:48.443669    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:48.477802    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:48.493481    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:48.534718    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:48.942900    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:48.978590    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:48.991310    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:49.046075    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:49.442472    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:49.477707    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:49.491837    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:49.535060    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:49.942148    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:49.975605    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:49.991387    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:50.039530    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:50.442209    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:50.475621    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:50.491288    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:50.534136    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:50.942042    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:50.976692    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:50.991461    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:51.034373    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:51.442918    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:51.475846    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:51.491533    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:51.534689    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:51.941724    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:51.976228    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:51.991579    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:52.034785    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:52.442538    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:52.484228    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:52.497331    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:52.534384    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:52.945204    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:52.988114    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:53.005793    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:53.034773    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:53.443646    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:53.475901    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:53.492097    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:53.540752    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:53.943366    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:53.977301    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:53.992875    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:54.036534    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:54.449539    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:54.478384    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:54.491192    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:54.540187    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:54.944796    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:54.978859    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:54.992759    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:55.034973    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:55.441801    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:55.475839    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:55.491727    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:55.539056    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:55.942707    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:55.979307    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:55.991211    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:56.034383    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:56.442363    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:56.475972    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:56.490791    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:56.542759    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:56.941468    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:56.975827    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:56.991516    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:57.037874    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:57.442033    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:57.476004    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:57.490546    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:57.534222    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:57.942338    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:57.975837    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:57.991110    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:58.034024    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:58.442183    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:58.477511    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:58.491500    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:58.535310    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:58.943116    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:58.979461    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:58.991769    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:59.035430    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:59.444310    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:59.478085    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:59.493795    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:03:59.536733    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:03:59.942867    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:03:59.978734    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:03:59.992458    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:04:00.045626    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:04:00.443739    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:04:00.479053    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:04:00.492665    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:04:00.535920    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:04:00.942672    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:04:00.980905    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:04:00.999704    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:04:01.034603    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:04:01.442280    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:04:01.476979    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:04:01.491480    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:04:01.534877    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:04:01.942719    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:04:01.978148    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:04:01.993520    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:04:02.033628    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:04:02.442439    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:04:02.476876    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:04:02.491133    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0811 23:04:02.533552    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:04:02.941891    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:04:02.976360    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:04:02.992531    8208 kapi.go:107] duration metric: took 1m4.041903732s to wait for kubernetes.io/minikube-addons=registry ...
	I0811 23:04:03.035129    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:04:03.442163    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:04:03.476632    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:04:03.533725    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:04:03.949410    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:04:03.977843    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:04:04.035571    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:04:04.441392    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:04:04.480813    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:04:04.535988    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:04:04.941609    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:04:04.975831    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:04:05.034659    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:04:05.441634    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:04:05.476791    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:04:05.534383    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:04:05.942099    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:04:05.976459    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:04:06.034925    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:04:06.441762    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:04:06.476817    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:04:06.534348    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:04:06.941903    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:04:06.982702    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:04:07.034502    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:04:07.441550    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0811 23:04:07.476941    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:04:07.534862    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:04:07.941582    8208 kapi.go:107] duration metric: took 1m4.570992176s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0811 23:04:07.944125    8208 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-557401 cluster.
	I0811 23:04:07.946241    8208 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0811 23:04:07.948313    8208 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0811 23:04:07.976058    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:04:08.033490    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:04:08.479260    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:04:08.534933    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:04:08.980584    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:04:09.037998    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:04:09.477271    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:04:09.534635    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:04:09.977912    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:04:10.048263    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:04:10.477213    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:04:10.534667    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:04:10.977262    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:04:11.034858    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:04:11.476578    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:04:11.535811    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:04:11.977815    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:04:12.036690    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:04:12.480588    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:04:12.535514    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:04:12.982910    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:04:13.034872    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:04:13.476906    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:04:13.535338    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:04:13.977604    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:04:14.039485    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:04:14.480986    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:04:14.534522    8208 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0811 23:04:14.977543    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:04:15.035707    8208 kapi.go:107] duration metric: took 1m15.523389947s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0811 23:04:15.476462    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:04:15.976517    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:04:16.476821    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:04:16.976015    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:04:17.478104    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:04:17.979246    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:04:18.476025    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:04:18.976059    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:04:19.477142    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:04:19.975651    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:04:20.477445    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:04:20.976044    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:04:21.475859    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:04:21.981891    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:04:22.481712    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:04:22.976950    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:04:23.475690    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:04:23.978869    8208 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0811 23:04:24.477256    8208 kapi.go:107] duration metric: took 1m24.546301373s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0811 23:04:24.479187    8208 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, default-storageclass, inspektor-gadget, cloud-spanner, metrics-server, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0811 23:04:24.481155    8208 addons.go:502] enable addons completed in 1m30.636179755s: enabled=[ingress-dns storage-provisioner default-storageclass inspektor-gadget cloud-spanner metrics-server volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I0811 23:04:24.481224    8208 start.go:233] waiting for cluster config update ...
	I0811 23:04:24.481243    8208 start.go:242] writing updated cluster config ...
	I0811 23:04:24.481588    8208 ssh_runner.go:195] Run: rm -f paused
	I0811 23:04:24.781585    8208 start.go:599] kubectl: 1.27.4, cluster: 1.27.4 (minor skew: 0)
	I0811 23:04:24.783618    8208 out.go:177] * Done! kubectl is now configured to use "addons-557401" cluster and "default" namespace by default
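For reference, the gcp-auth opt-out mentioned in the log above is exercised at pod creation time. A minimal sketch, not part of the recorded run (the pod name and image are illustrative):

	# Create a pod that gcp-auth will skip, via the `gcp-auth-skip-secret` label:
	kubectl --context addons-557401 run no-gcp-creds --image=nginx \
	  --labels="gcp-auth-skip-secret=true"
	# Per the advisory above, pods created before the addon was enabled must be
	# recreated, or the addon re-enabled with --refresh:
	minikube -p addons-557401 addons enable gcp-auth --refresh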
	
	* 
	* ==> CRI-O <==
	* Aug 11 23:07:40 addons-557401 crio[890]: time="2023-08-11 23:07:40.638269966Z" level=info msg="Removed container 318e04cf0437b58503d37ececd27a16cac4afd3f3b8219b1341cda9267052309: ingress-nginx/ingress-nginx-controller-7799c6795f-wq2tm/controller" id=fbfc3f71-683b-4272-8d5f-093e6f775983 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 11 23:07:40 addons-557401 crio[890]: time="2023-08-11 23:07:40.641010559Z" level=info msg="Removing container: 1f1812d131bcbd98c2008e623c3fc3cbbc7e3ececcf139bdf5709ccdb4b30700" id=b1bc0515-9f8c-44c1-8876-cd9dee3af365 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 11 23:07:40 addons-557401 crio[890]: time="2023-08-11 23:07:40.662770075Z" level=info msg="Removed container 1f1812d131bcbd98c2008e623c3fc3cbbc7e3ececcf139bdf5709ccdb4b30700: default/hello-world-app-65bdb79f98-b92s9/hello-world-app" id=b1bc0515-9f8c-44c1-8876-cd9dee3af365 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 11 23:07:40 addons-557401 crio[890]: time="2023-08-11 23:07:40.822115789Z" level=info msg="Checking image status: registry.k8s.io/pause:3.9" id=efb1553a-df49-490a-8116-5db43288ca6e name=/runtime.v1.ImageService/ImageStatus
	Aug 11 23:07:40 addons-557401 crio[890]: time="2023-08-11 23:07:40.822331422Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,RepoTags:[registry.k8s.io/pause:3.9],RepoDigests:[registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6 registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097],Size_:520014,Uid:&Int64Value{Value:65535,},Username:,Spec:nil,},Info:map[string]string{},}" id=efb1553a-df49-490a-8116-5db43288ca6e name=/runtime.v1.ImageService/ImageStatus
	Aug 11 23:07:41 addons-557401 crio[890]: time="2023-08-11 23:07:41.038358187Z" level=info msg="Removing container: 4b55f762fb6403af40d200312e77e7380f941e0dcde5c88247bcd9f07f7fe221" id=076fa363-f2e8-40cd-b949-fa82a897870c name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 11 23:07:41 addons-557401 crio[890]: time="2023-08-11 23:07:41.066137636Z" level=info msg="Removed container 4b55f762fb6403af40d200312e77e7380f941e0dcde5c88247bcd9f07f7fe221: ingress-nginx/ingress-nginx-admission-patch-xqj8j/patch" id=076fa363-f2e8-40cd-b949-fa82a897870c name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 11 23:07:41 addons-557401 crio[890]: time="2023-08-11 23:07:41.067678323Z" level=info msg="Removing container: 4162fd3288d6b2b5fa1406d16022ff64cbf073a10c7ad9fe6339245db4c6e7ef" id=fa5d415e-638f-48f2-a02c-9326ad5b15fa name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 11 23:07:41 addons-557401 crio[890]: time="2023-08-11 23:07:41.098152809Z" level=info msg="Removed container 4162fd3288d6b2b5fa1406d16022ff64cbf073a10c7ad9fe6339245db4c6e7ef: ingress-nginx/ingress-nginx-admission-create-h7tcx/create" id=fa5d415e-638f-48f2-a02c-9326ad5b15fa name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 11 23:07:41 addons-557401 crio[890]: time="2023-08-11 23:07:41.099571287Z" level=info msg="Stopping pod sandbox: 24199c9581f37d8d56b218f386eea8419f11247c407ca97318866d53b1c9ef61" id=fd520ec6-5867-4a9c-9f4a-6d7393051762 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 11 23:07:41 addons-557401 crio[890]: time="2023-08-11 23:07:41.099613191Z" level=info msg="Stopped pod sandbox (already stopped): 24199c9581f37d8d56b218f386eea8419f11247c407ca97318866d53b1c9ef61" id=fd520ec6-5867-4a9c-9f4a-6d7393051762 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 11 23:07:41 addons-557401 crio[890]: time="2023-08-11 23:07:41.099927582Z" level=info msg="Removing pod sandbox: 24199c9581f37d8d56b218f386eea8419f11247c407ca97318866d53b1c9ef61" id=c611ad33-48e5-4151-8663-cd616dc4dd08 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 11 23:07:41 addons-557401 crio[890]: time="2023-08-11 23:07:41.108023600Z" level=info msg="Removed pod sandbox: 24199c9581f37d8d56b218f386eea8419f11247c407ca97318866d53b1c9ef61" id=c611ad33-48e5-4151-8663-cd616dc4dd08 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 11 23:07:41 addons-557401 crio[890]: time="2023-08-11 23:07:41.108469800Z" level=info msg="Stopping pod sandbox: 2c1e89421fcf48e80d3b26f656ca08191c9f08cc82628ed069a07f4d4f233af3" id=5e4c9565-b9cd-4c9f-a418-54ad3cec01be name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 11 23:07:41 addons-557401 crio[890]: time="2023-08-11 23:07:41.108598876Z" level=info msg="Stopped pod sandbox (already stopped): 2c1e89421fcf48e80d3b26f656ca08191c9f08cc82628ed069a07f4d4f233af3" id=5e4c9565-b9cd-4c9f-a418-54ad3cec01be name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 11 23:07:41 addons-557401 crio[890]: time="2023-08-11 23:07:41.108913637Z" level=info msg="Removing pod sandbox: 2c1e89421fcf48e80d3b26f656ca08191c9f08cc82628ed069a07f4d4f233af3" id=2b9bb468-14f1-491f-aad4-a3b5b8a3d62b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 11 23:07:41 addons-557401 crio[890]: time="2023-08-11 23:07:41.116170186Z" level=info msg="Removed pod sandbox: 2c1e89421fcf48e80d3b26f656ca08191c9f08cc82628ed069a07f4d4f233af3" id=2b9bb468-14f1-491f-aad4-a3b5b8a3d62b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 11 23:07:41 addons-557401 crio[890]: time="2023-08-11 23:07:41.116662869Z" level=info msg="Stopping pod sandbox: 37efb8eecfd62919dcf7bbefc14945f5455b4d49d3b23c06b89b716e6e9b7aa8" id=cdb8a846-8e7b-4aca-95b3-ff83509a33f3 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 11 23:07:41 addons-557401 crio[890]: time="2023-08-11 23:07:41.116701983Z" level=info msg="Stopped pod sandbox (already stopped): 37efb8eecfd62919dcf7bbefc14945f5455b4d49d3b23c06b89b716e6e9b7aa8" id=cdb8a846-8e7b-4aca-95b3-ff83509a33f3 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 11 23:07:41 addons-557401 crio[890]: time="2023-08-11 23:07:41.117076133Z" level=info msg="Removing pod sandbox: 37efb8eecfd62919dcf7bbefc14945f5455b4d49d3b23c06b89b716e6e9b7aa8" id=8c4f5d63-cf2a-4d62-a725-354326181182 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 11 23:07:41 addons-557401 crio[890]: time="2023-08-11 23:07:41.123903148Z" level=info msg="Removed pod sandbox: 37efb8eecfd62919dcf7bbefc14945f5455b4d49d3b23c06b89b716e6e9b7aa8" id=8c4f5d63-cf2a-4d62-a725-354326181182 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 11 23:07:41 addons-557401 crio[890]: time="2023-08-11 23:07:41.124341282Z" level=info msg="Stopping pod sandbox: 3cc9873be546766cecf79407971da4b99c557fdfa41d8850778a5dad0da3b828" id=381c855b-a076-4bce-8b77-8ba932be3230 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 11 23:07:41 addons-557401 crio[890]: time="2023-08-11 23:07:41.124373127Z" level=info msg="Stopped pod sandbox (already stopped): 3cc9873be546766cecf79407971da4b99c557fdfa41d8850778a5dad0da3b828" id=381c855b-a076-4bce-8b77-8ba932be3230 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 11 23:07:41 addons-557401 crio[890]: time="2023-08-11 23:07:41.124743305Z" level=info msg="Removing pod sandbox: 3cc9873be546766cecf79407971da4b99c557fdfa41d8850778a5dad0da3b828" id=74983d1b-d5d2-4406-bbf1-b280dcca9014 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 11 23:07:41 addons-557401 crio[890]: time="2023-08-11 23:07:41.136329586Z" level=info msg="Removed pod sandbox: 3cc9873be546766cecf79407971da4b99c557fdfa41d8850778a5dad0da3b828" id=74983d1b-d5d2-4406-bbf1-b280dcca9014 name=/runtime.v1.RuntimeService/RemovePodSandbox
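The container and sandbox removals logged above can be cross-checked from inside the node with crictl. A sketch, not part of the recorded run:

	# List containers (including exited ones) and pod sandboxes as CRI-O sees them:
	minikube -p addons-557401 ssh -- sudo crictl ps -a
	minikube -p addons-557401 ssh -- sudo crictl pods
	# Inspect one of the container IDs named in the CRI-O log:
	minikube -p addons-557401 ssh -- sudo crictl inspect <container-id>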
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                          CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d18eb6e89eaec       13753a81eccfdd153bf7fc9a4c9198edbcce0110e7f46ed0d38cc654a6458ff5                                               7 seconds ago       Exited              hello-world-app           2                   2932ea12c5255       hello-world-app-65bdb79f98-b92s9
	5c52b38c18944       docker.io/library/nginx@sha256:647c5c83418c19eef0cddc647b9899326e3081576390c4c7baa4fce545123b6c                2 minutes ago       Running             nginx                     0                   204e2a9c29162       nginx
	34ead65fc97bf       ghcr.io/headlamp-k8s/headlamp@sha256:498ea22dc5acadaa4015e7a50335d21fdce45d9e8f1f8adf29c2777da4182f98          3 minutes ago       Running             headlamp                  0                   f62cf3798849c       headlamp-5c78f74d8d-dnjhl
	08cfe3878cec2       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:63b520448091bc94aa4dba00d6b3b3c25e410c4fb73aa46feae5b25f9895abaa   3 minutes ago       Running             gcp-auth                  0                   b37913a839dd4       gcp-auth-58478865f7-dcc9m
	7c3cb13bf6c34       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                               4 minutes ago       Running             storage-provisioner       0                   0e40425d1f72a       storage-provisioner
	a010235e861cc       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                               4 minutes ago       Running             coredns                   0                   aadcba1301ac1       coredns-5d78c9869d-ftx4z
	7b7cd8a9dbd03       532e5a30e948f1c084333316b13e68fbeff8df667f3830b082005127a6d86317                                               4 minutes ago       Running             kube-proxy                0                   6dbbaf81b5e92       kube-proxy-cmkc6
	bc06652771fe6       b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79                                               4 minutes ago       Running             kindnet-cni               0                   c0db06d0d165b       kindnet-2c4dk
	8c46658008852       64aece92d6bde5b472d8185fcd2d5ab1add8814923a26561821f7cab5e819388                                               5 minutes ago       Running             kube-apiserver            0                   ec1d422d3cf67       kube-apiserver-addons-557401
	7088759248cfe       389f6f052cf83156f82a2bbbf6ea2c24292d246b58900d91f6a1707eacf510b2                                               5 minutes ago       Running             kube-controller-manager   0                   0e46965d62259       kube-controller-manager-addons-557401
	f5bcf932c2960       24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737                                               5 minutes ago       Running             etcd                      0                   f59a03a940fdb       etcd-addons-557401
	952f71f8d9582       6eb63895cb67fce76da3ed6eaaa865ff55e7c761c9e6a691a83855ff0987a085                                               5 minutes ago       Running             kube-scheduler            0                   d1824a58ed139       kube-scheduler-addons-557401
	
	* 
	* ==> coredns [a010235e861ccfeb6af4693e8ce62517c9c3df013290f03d03b9a01546029f58] <==
	* [INFO] 10.244.0.17:52250 - 24799 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000043266s
	[INFO] 10.244.0.17:52250 - 39037 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000057544s
	[INFO] 10.244.0.17:52250 - 3017 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000062147s
	[INFO] 10.244.0.17:52250 - 54640 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000038884s
	[INFO] 10.244.0.17:52250 - 30742 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001178007s
	[INFO] 10.244.0.17:52250 - 2580 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000847534s
	[INFO] 10.244.0.17:52250 - 35978 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000065888s
	[INFO] 10.244.0.17:55168 - 17227 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000048789s
	[INFO] 10.244.0.17:38939 - 56069 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000202218s
	[INFO] 10.244.0.17:38939 - 31301 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00006556s
	[INFO] 10.244.0.17:38939 - 3739 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000054565s
	[INFO] 10.244.0.17:38939 - 228 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000041822s
	[INFO] 10.244.0.17:38939 - 10576 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000052973s
	[INFO] 10.244.0.17:38939 - 16639 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000054228s
	[INFO] 10.244.0.17:55168 - 42710 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000183986s
	[INFO] 10.244.0.17:55168 - 7061 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000062606s
	[INFO] 10.244.0.17:55168 - 41454 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000070024s
	[INFO] 10.244.0.17:55168 - 37208 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000060005s
	[INFO] 10.244.0.17:38939 - 5504 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001208547s
	[INFO] 10.244.0.17:55168 - 53894 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000052152s
	[INFO] 10.244.0.17:38939 - 6485 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001221331s
	[INFO] 10.244.0.17:55168 - 39265 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000999322s
	[INFO] 10.244.0.17:38939 - 57690 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000080551s
	[INFO] 10.244.0.17:55168 - 36394 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000865782s
	[INFO] 10.244.0.17:55168 - 19784 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000098709s
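The NXDOMAIN/NOERROR pairs above are ordinary search-path expansion: with the default ndots:5, the resolver tries hello-world-app.default.svc.cluster.local against every suffix in the pod's search list (svc.cluster.local, cluster.local, us-east-2.compute.internal) before the name is queried as-is and answered NOERROR. A sketch for reproducing this from a throwaway pod (pod name and image are illustrative, not from this run):

	kubectl --context addons-557401 run dnsprobe --image=busybox:1.36 --restart=Never -- sleep 3600
	# The search list that produced the expansions above:
	kubectl --context addons-557401 exec dnsprobe -- cat /etc/resolv.conf
	# Repeat the lookup CoreDNS logged:
	kubectl --context addons-557401 exec dnsprobe -- nslookup hello-world-app.default.svc.cluster.local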
	
	* 
	* ==> describe nodes <==
	* Name:               addons-557401
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-557401
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0bff008270ec17d4e0c2c90a14e18ac31a0e01f5
	                    minikube.k8s.io/name=addons-557401
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_11T23_02_41_0700
	                    minikube.k8s.io/version=v1.31.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-557401
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 11 Aug 2023 23:02:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-557401
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 11 Aug 2023 23:07:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 11 Aug 2023 23:07:47 +0000   Fri, 11 Aug 2023 23:02:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 11 Aug 2023 23:07:47 +0000   Fri, 11 Aug 2023 23:02:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 11 Aug 2023 23:07:47 +0000   Fri, 11 Aug 2023 23:02:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 11 Aug 2023 23:07:47 +0000   Fri, 11 Aug 2023 23:03:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-557401
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022628Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022628Ki
	  pods:               110
	System Info:
	  Machine ID:                 9f245fc7657a4d468a8f3b0bb5b2a20d
	  System UUID:                a0720725-218a-43fb-8f95-cf5657aea2cf
	  Boot ID:                    9640b2fc-8f02-48dc-9a98-7457f33cfb40
	  Kernel Version:             5.15.0-1040-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.27.4
	  Kube-Proxy Version:         v1.27.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-65bdb79f98-b92s9         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  gcp-auth                    gcp-auth-58478865f7-dcc9m                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m44s
	  headlamp                    headlamp-5c78f74d8d-dnjhl                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m15s
	  kube-system                 coredns-5d78c9869d-ftx4z                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m54s
	  kube-system                 etcd-addons-557401                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m6s
	  kube-system                 kindnet-2c4dk                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m54s
	  kube-system                 kube-apiserver-addons-557401             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-controller-manager-addons-557401    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-proxy-cmkc6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-scheduler-addons-557401             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)    100m (5%)
	  memory             220Mi (2%)    220Mi (2%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-1Gi      0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	  hugepages-32Mi     0 (0%)        0 (0%)
	  hugepages-64Ki     0 (0%)        0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m48s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m14s (x8 over 5m14s)  kubelet          Node addons-557401 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m14s (x8 over 5m14s)  kubelet          Node addons-557401 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m14s (x8 over 5m14s)  kubelet          Node addons-557401 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m7s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m7s                   kubelet          Node addons-557401 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m7s                   kubelet          Node addons-557401 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m7s                   kubelet          Node addons-557401 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m54s                  node-controller  Node addons-557401 event: Registered Node addons-557401 in Controller
	  Normal  NodeReady                4m21s                  kubelet          Node addons-557401 status is now: NodeReady
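The node view above can be regenerated at any time while the cluster is up:

	kubectl --context addons-557401 describe node addons-557401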
	
	* 
	* ==> dmesg <==
	* [Aug11 22:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015089] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +1.135994] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.200050] kauditd_printk_skb: 26 callbacks suppressed
	
	* 
	* ==> etcd [f5bcf932c296048aecdb0c10bf39bdf51d45bc8af914b8d98b187edc7b24b1f6] <==
	* {"level":"info","ts":"2023-08-11T23:02:34.165Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-08-11T23:02:34.165Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-08-11T23:02:34.165Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-08-11T23:02:34.166Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-08-11T23:02:34.166Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-08-11T23:02:34.737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2023-08-11T23:02:34.737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2023-08-11T23:02:34.737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2023-08-11T23:02:34.737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2023-08-11T23:02:34.737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-08-11T23:02:34.737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-08-11T23:02:34.737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-08-11T23:02:34.741Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-557401 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-11T23:02:34.741Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-11T23:02:34.742Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-11T23:02:34.742Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-11T23:02:34.746Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-11T23:02:34.747Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-08-11T23:02:34.757Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-11T23:02:34.759Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-11T23:02:34.759Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-11T23:02:34.758Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-11T23:02:34.765Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-08-11T23:02:56.942Z","caller":"traceutil/trace.go:171","msg":"trace[1816262593] transaction","detail":"{read_only:false; response_revision:412; number_of_response:1; }","duration":"120.631162ms","start":"2023-08-11T23:02:56.821Z","end":"2023-08-11T23:02:56.942Z","steps":["trace[1816262593] 'process raft request'  (duration: 76.451999ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-11T23:02:57.547Z","caller":"traceutil/trace.go:171","msg":"trace[226360573] transaction","detail":"{read_only:false; response_revision:421; number_of_response:1; }","duration":"104.536914ms","start":"2023-08-11T23:02:57.442Z","end":"2023-08-11T23:02:57.547Z","steps":["trace[226360573] 'process raft request'  (duration: 67.542488ms)","trace[226360573] 'compare'  (duration: 27.37989ms)"],"step_count":2}
	
	* 
	* ==> gcp-auth [08cfe3878cec29c70564482eea944ee2e0d3138ffc7095d6d9e7a60f4abbf4dd] <==
	* 2023/08/11 23:04:06 GCP Auth Webhook started!
	2023/08/11 23:04:32 Ready to marshal response ...
	2023/08/11 23:04:32 Ready to write response ...
	2023/08/11 23:04:32 Ready to marshal response ...
	2023/08/11 23:04:32 Ready to write response ...
	2023/08/11 23:04:32 Ready to marshal response ...
	2023/08/11 23:04:32 Ready to write response ...
	2023/08/11 23:04:35 Ready to marshal response ...
	2023/08/11 23:04:35 Ready to write response ...
	2023/08/11 23:04:49 Ready to marshal response ...
	2023/08/11 23:04:49 Ready to write response ...
	2023/08/11 23:05:00 Ready to marshal response ...
	2023/08/11 23:05:00 Ready to write response ...
	2023/08/11 23:05:06 Ready to marshal response ...
	2023/08/11 23:05:06 Ready to write response ...
	2023/08/11 23:07:21 Ready to marshal response ...
	2023/08/11 23:07:21 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  23:07:48 up 50 min,  0 users,  load average: 0.23, 0.79, 0.48
	Linux addons-557401 5.15.0-1040-aws #45~20.04.1-Ubuntu SMP Tue Jul 11 19:11:12 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [bc06652771fe6bce319b9b0610ab27ce60f461718e03f6d7a4ae510674113889] <==
	* I0811 23:05:46.820042       1 main.go:227] handling current node
	I0811 23:05:56.830269       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0811 23:05:56.830296       1 main.go:227] handling current node
	I0811 23:06:06.842867       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0811 23:06:06.842891       1 main.go:227] handling current node
	I0811 23:06:16.854171       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0811 23:06:16.854198       1 main.go:227] handling current node
	I0811 23:06:26.861607       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0811 23:06:26.861639       1 main.go:227] handling current node
	I0811 23:06:36.866661       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0811 23:06:36.866688       1 main.go:227] handling current node
	I0811 23:06:46.878581       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0811 23:06:46.878608       1 main.go:227] handling current node
	I0811 23:06:56.883148       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0811 23:06:56.883177       1 main.go:227] handling current node
	I0811 23:07:06.894992       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0811 23:07:06.895017       1 main.go:227] handling current node
	I0811 23:07:16.909299       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0811 23:07:16.909327       1 main.go:227] handling current node
	I0811 23:07:26.921770       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0811 23:07:26.921795       1 main.go:227] handling current node
	I0811 23:07:36.933026       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0811 23:07:36.933055       1 main.go:227] handling current node
	I0811 23:07:46.945326       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0811 23:07:46.945520       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [8c46658008852eccc347b134a8c47531e6a53a4bb481f518bbd000c8c917a6fd] <==
	* I0811 23:05:21.187268       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0811 23:05:21.191469       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0811 23:05:21.191517       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0811 23:05:21.199498       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0811 23:05:21.199647       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0811 23:05:21.227890       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0811 23:05:21.227937       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0811 23:05:21.228078       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0811 23:05:21.228107       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0811 23:05:22.201287       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0811 23:05:22.228869       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0811 23:05:22.245204       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0811 23:05:38.660299       1 handler_proxy.go:144] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0811 23:05:38.660338       1 handler_proxy.go:100] no RequestInfo found in the context
	E0811 23:05:38.660375       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0811 23:05:38.660389       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0811 23:05:38.678542       1 controller.go:132] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E0811 23:06:38.661521       1 handler_proxy.go:144] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0811 23:06:38.661548       1 handler_proxy.go:100] no RequestInfo found in the context
	E0811 23:06:38.661589       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0811 23:06:38.661597       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0811 23:07:22.161765       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs=map[IPv4:10.99.249.26]
	E0811 23:07:38.627194       1 watch.go:287] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoderWithAllocator{writer:responsewriter.outerWithCloseNotifyAndFlush{UserProvidedDecorator:(*metrics.ResponseWriterDelegator)(0x400c4a62d0), InnerCloseNotifierFlusher:struct { httpsnoop.Unwrapper; http.ResponseWriter; http.Flusher; http.CloseNotifier; http.Pusher }{Unwrapper:(*httpsnoop.rw)(0x400458d720), ResponseWriter:(*httpsnoop.rw)(0x400458d720), Flusher:(*httpsnoop.rw)(0x400458d720), CloseNotifier:(*httpsnoop.rw)(0x400458d720), Pusher:(*httpsnoop.rw)(0x400458d720)}}, encoder:(*versioning.codec)(0x400c571b80), memAllocator:(*runtime.Allocator)(0x400b421920)})
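The recurring 503 for v1beta1.metrics.k8s.io above means the aggregated APIService is still registered while its backing kube-system/metrics-server Service is gone. A sketch for confirming the aggregation state, not part of the recorded run:

	kubectl --context addons-557401 get apiservice v1beta1.metrics.k8s.io
	kubectl --context addons-557401 -n kube-system get svc metrics-server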
	
	* 
	* ==> kube-controller-manager [7088759248cfe8a6ec44cf7268a6648109e7d4969208ef7d2f7dd8cc6945d9ad] <==
	* E0811 23:05:59.086130       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0811 23:05:59.910728       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0811 23:05:59.910760       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0811 23:06:00.809938       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0811 23:06:00.809972       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0811 23:06:27.732561       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0811 23:06:27.732593       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0811 23:06:30.814305       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0811 23:06:30.814348       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0811 23:06:39.109690       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0811 23:06:39.109735       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0811 23:06:39.325110       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0811 23:06:39.325146       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0811 23:07:06.573359       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0811 23:07:06.573393       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0811 23:07:16.248265       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0811 23:07:16.248296       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0811 23:07:21.905013       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-65bdb79f98 to 1"
	I0811 23:07:21.936644       1 event.go:307] "Event occurred" object="default/hello-world-app-65bdb79f98" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-65bdb79f98-b92s9"
	W0811 23:07:30.457264       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0811 23:07:30.457299       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0811 23:07:30.474077       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0811 23:07:30.474180       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0811 23:07:39.336928       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-create
	I0811 23:07:39.349856       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-patch
	
	* 
	* ==> kube-proxy [7b7cd8a9dbd03b01246d58aed7dc43a4cf4db16fbb792efdcda3c75d1339bcac] <==
	* I0811 23:02:58.877328       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0811 23:02:58.893179       1 server_others.go:110] "Detected node IP" address="192.168.49.2"
	I0811 23:02:58.893303       1 server_others.go:554] "Using iptables proxy"
	I0811 23:02:59.093052       1 server_others.go:192] "Using iptables Proxier"
	I0811 23:02:59.093172       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0811 23:02:59.093204       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0811 23:02:59.093254       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0811 23:02:59.093353       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0811 23:02:59.093908       1 server.go:658] "Version info" version="v1.27.4"
	I0811 23:02:59.094149       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0811 23:02:59.094894       1 config.go:188] "Starting service config controller"
	I0811 23:02:59.095000       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0811 23:02:59.095074       1 config.go:97] "Starting endpoint slice config controller"
	I0811 23:02:59.095102       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0811 23:02:59.095636       1 config.go:315] "Starting node config controller"
	I0811 23:02:59.095724       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0811 23:02:59.196805       1 shared_informer.go:318] Caches are synced for service config
	I0811 23:02:59.197158       1 shared_informer.go:318] Caches are synced for node config
	I0811 23:02:59.197179       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [952f71f8d9582c38fdf6d7a2d7bbf0fa7fbaab3a4887f72c3789ce803cd57189] <==
	* W0811 23:02:38.258028       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0811 23:02:38.258553       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0811 23:02:38.258066       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0811 23:02:38.258636       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0811 23:02:38.271655       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0811 23:02:38.271970       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0811 23:02:38.271790       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0811 23:02:38.272716       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0811 23:02:38.271826       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0811 23:02:38.272784       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0811 23:02:38.271858       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0811 23:02:38.272800       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0811 23:02:38.271899       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0811 23:02:38.272815       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0811 23:02:38.271934       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0811 23:02:38.272836       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0811 23:02:38.274718       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0811 23:02:38.274982       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0811 23:02:38.274857       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0811 23:02:38.275077       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0811 23:02:38.274921       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0811 23:02:38.275162       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0811 23:02:38.274952       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0811 23:02:38.275236       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0811 23:02:39.449489       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
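	
	The list/watch failures at 23:02:38 are the usual kube-scheduler startup race: its informers come up before the system:kube-scheduler RBAC bindings are visible, and the "Caches are synced" line a second later shows the errors were transient. A hedged way to confirm the permissions after startup, using plain kubectl impersonation:
	
	    kubectl --context addons-557401 auth can-i list csidrivers.storage.k8s.io --as=system:kube-scheduler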
	
	* 
	* ==> kubelet <==
	* Aug 11 23:07:40 addons-557401 kubelet[1359]: I0811 23:07:40.837161    1359 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=f635f399-3597-4bbf-93b8-16a1b3e4ab5a path="/var/lib/kubelet/pods/f635f399-3597-4bbf-93b8-16a1b3e4ab5a/volumes"
	Aug 11 23:07:41 addons-557401 kubelet[1359]: W0811 23:07:41.035275    1359 machine.go:65] Cannot read vendor id correctly, set empty.
	Aug 11 23:07:41 addons-557401 kubelet[1359]: I0811 23:07:41.037269    1359 scope.go:115] "RemoveContainer" containerID="4b55f762fb6403af40d200312e77e7380f941e0dcde5c88247bcd9f07f7fe221"
	Aug 11 23:07:41 addons-557401 kubelet[1359]: E0811 23:07:41.040894    1359 container_manager_linux.go:515] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /docker/047bc6397f6f8d5b75e761ed1cb023e71c9b1d0ae0d058c79d79319534c04928, memory: /docker/047bc6397f6f8d5b75e761ed1cb023e71c9b1d0ae0d058c79d79319534c04928/system.slice/kubelet.service"
	Aug 11 23:07:41 addons-557401 kubelet[1359]: E0811 23:07:41.047567    1359 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/267fab5743e72debdb5f6c61d211bafd4cad033d1685320c11401951cff621c5/diff" to get inode usage: stat /var/lib/containers/storage/overlay/267fab5743e72debdb5f6c61d211bafd4cad033d1685320c11401951cff621c5/diff: no such file or directory, extraDiskErr: <nil>
	Aug 11 23:07:41 addons-557401 kubelet[1359]: E0811 23:07:41.049023    1359 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/eefeee0e38229eaf978723a4e4409970c97d96e4657672b63747a8440e8bfcb7/diff" to get inode usage: stat /var/lib/containers/storage/overlay/eefeee0e38229eaf978723a4e4409970c97d96e4657672b63747a8440e8bfcb7/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-ingress-dns-minikube_98138bc5-f73b-49bc-967d-a1b554ab9ae6/minikube-ingress-dns/5.log" to get inode usage: stat /var/log/pods/kube-system_kube-ingress-dns-minikube_98138bc5-f73b-49bc-967d-a1b554ab9ae6/minikube-ingress-dns/5.log: no such file or directory
	Aug 11 23:07:41 addons-557401 kubelet[1359]: E0811 23:07:41.049281    1359 manager.go:1106] Failed to create existing container: /docker/047bc6397f6f8d5b75e761ed1cb023e71c9b1d0ae0d058c79d79319534c04928/crio-0fd341bea6e18bca5e39c4b9a607a6c487771260336967dc55c64605f439d908: Error finding container 0fd341bea6e18bca5e39c4b9a607a6c487771260336967dc55c64605f439d908: Status 404 returned error can't find the container with id 0fd341bea6e18bca5e39c4b9a607a6c487771260336967dc55c64605f439d908
	Aug 11 23:07:41 addons-557401 kubelet[1359]: E0811 23:07:41.049578    1359 manager.go:1106] Failed to create existing container: /docker/047bc6397f6f8d5b75e761ed1cb023e71c9b1d0ae0d058c79d79319534c04928/crio/crio-4b55f762fb6403af40d200312e77e7380f941e0dcde5c88247bcd9f07f7fe221: Error finding container 4b55f762fb6403af40d200312e77e7380f941e0dcde5c88247bcd9f07f7fe221: Status 404 returned error can't find the container with id 4b55f762fb6403af40d200312e77e7380f941e0dcde5c88247bcd9f07f7fe221
	Aug 11 23:07:41 addons-557401 kubelet[1359]: E0811 23:07:41.049896    1359 manager.go:1106] Failed to create existing container: /crio-0fd341bea6e18bca5e39c4b9a607a6c487771260336967dc55c64605f439d908: Error finding container 0fd341bea6e18bca5e39c4b9a607a6c487771260336967dc55c64605f439d908: Status 404 returned error can't find the container with id 0fd341bea6e18bca5e39c4b9a607a6c487771260336967dc55c64605f439d908
	Aug 11 23:07:41 addons-557401 kubelet[1359]: E0811 23:07:41.055906    1359 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/9f13d6e419b6ecce25b5b9e89c054819d009feea770c1a42f004fe125e555a5d/diff" to get inode usage: stat /var/lib/containers/storage/overlay/9f13d6e419b6ecce25b5b9e89c054819d009feea770c1a42f004fe125e555a5d/diff: no such file or directory, extraDiskErr: <nil>
	Aug 11 23:07:41 addons-557401 kubelet[1359]: E0811 23:07:41.058457    1359 manager.go:1106] Failed to create existing container: /docker/047bc6397f6f8d5b75e761ed1cb023e71c9b1d0ae0d058c79d79319534c04928/crio-85c685377d63c386a2e665071e5ed39fc18c59a9bfee9f3055de61b5a37d82f7: Error finding container 85c685377d63c386a2e665071e5ed39fc18c59a9bfee9f3055de61b5a37d82f7: Status 404 returned error can't find the container with id 85c685377d63c386a2e665071e5ed39fc18c59a9bfee9f3055de61b5a37d82f7
	Aug 11 23:07:41 addons-557401 kubelet[1359]: E0811 23:07:41.060145    1359 manager.go:1106] Failed to create existing container: /crio-85c685377d63c386a2e665071e5ed39fc18c59a9bfee9f3055de61b5a37d82f7: Error finding container 85c685377d63c386a2e665071e5ed39fc18c59a9bfee9f3055de61b5a37d82f7: Status 404 returned error can't find the container with id 85c685377d63c386a2e665071e5ed39fc18c59a9bfee9f3055de61b5a37d82f7
	Aug 11 23:07:41 addons-557401 kubelet[1359]: I0811 23:07:41.066575    1359 scope.go:115] "RemoveContainer" containerID="4162fd3288d6b2b5fa1406d16022ff64cbf073a10c7ad9fe6339245db4c6e7ef"
	Aug 11 23:07:41 addons-557401 kubelet[1359]: E0811 23:07:41.069138    1359 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: <nil>, extraDiskErr: could not stat "/var/log/pods/ingress-nginx_ingress-nginx-admission-create-h7tcx_aaec31f4-afcc-4595-a126-b29bef6f7bd3/create/0.log" to get inode usage: stat /var/log/pods/ingress-nginx_ingress-nginx-admission-create-h7tcx_aaec31f4-afcc-4595-a126-b29bef6f7bd3/create/0.log: no such file or directory
	Aug 11 23:07:41 addons-557401 kubelet[1359]: E0811 23:07:41.075929    1359 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/d167b7dc346354cb372b7c359c3d5929b669fc1260b8fe29f0b4b1423b68b835/diff" to get inode usage: stat /var/lib/containers/storage/overlay/d167b7dc346354cb372b7c359c3d5929b669fc1260b8fe29f0b4b1423b68b835/diff: no such file or directory, extraDiskErr: <nil>
	Aug 11 23:07:41 addons-557401 kubelet[1359]: E0811 23:07:41.075977    1359 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/d14b869c2c533e5d2d4f6f648b48064ba9251d074b9795f534651d941f7fd04a/diff" to get inode usage: stat /var/lib/containers/storage/overlay/d14b869c2c533e5d2d4f6f648b48064ba9251d074b9795f534651d941f7fd04a/diff: no such file or directory, extraDiskErr: <nil>
	Aug 11 23:07:41 addons-557401 kubelet[1359]: E0811 23:07:41.078401    1359 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/61610fa797800b5a342440f34cbf2a80a34989cb7b5bc57c271c260e62f79c25/diff" to get inode usage: stat /var/lib/containers/storage/overlay/61610fa797800b5a342440f34cbf2a80a34989cb7b5bc57c271c260e62f79c25/diff: no such file or directory, extraDiskErr: <nil>
	Aug 11 23:07:41 addons-557401 kubelet[1359]: E0811 23:07:41.081045    1359 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/a44a3d9cd0816f521fb170df45d431bfbdfbd4638a051a096bbae2aca524bdca/diff" to get inode usage: stat /var/lib/containers/storage/overlay/a44a3d9cd0816f521fb170df45d431bfbdfbd4638a051a096bbae2aca524bdca/diff: no such file or directory, extraDiskErr: <nil>
	Aug 11 23:07:41 addons-557401 kubelet[1359]: E0811 23:07:41.082192    1359 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/daefffa1a644e435ef0d6af49dfb42fbb8e9acfa04c9953c541b418f2e99ee21/diff" to get inode usage: stat /var/lib/containers/storage/overlay/daefffa1a644e435ef0d6af49dfb42fbb8e9acfa04c9953c541b418f2e99ee21/diff: no such file or directory, extraDiskErr: <nil>
	Aug 11 23:07:41 addons-557401 kubelet[1359]: E0811 23:07:41.085527    1359 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/0ece70c2f0c0c44547b97378d30b424831a2e6995a2c74339345cc827a3b75f4/diff" to get inode usage: stat /var/lib/containers/storage/overlay/0ece70c2f0c0c44547b97378d30b424831a2e6995a2c74339345cc827a3b75f4/diff: no such file or directory, extraDiskErr: <nil>
	Aug 11 23:07:41 addons-557401 kubelet[1359]: E0811 23:07:41.085574    1359 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/d38971f4668d0a915f234767e442e20f219594c483ccfb767cdd293e7c410b80/diff" to get inode usage: stat /var/lib/containers/storage/overlay/d38971f4668d0a915f234767e442e20f219594c483ccfb767cdd293e7c410b80/diff: no such file or directory, extraDiskErr: <nil>
	Aug 11 23:07:41 addons-557401 kubelet[1359]: E0811 23:07:41.085588    1359 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/61610fa797800b5a342440f34cbf2a80a34989cb7b5bc57c271c260e62f79c25/diff" to get inode usage: stat /var/lib/containers/storage/overlay/61610fa797800b5a342440f34cbf2a80a34989cb7b5bc57c271c260e62f79c25/diff: no such file or directory, extraDiskErr: <nil>
	Aug 11 23:07:41 addons-557401 kubelet[1359]: E0811 23:07:41.086679    1359 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/daefffa1a644e435ef0d6af49dfb42fbb8e9acfa04c9953c541b418f2e99ee21/diff" to get inode usage: stat /var/lib/containers/storage/overlay/daefffa1a644e435ef0d6af49dfb42fbb8e9acfa04c9953c541b418f2e99ee21/diff: no such file or directory, extraDiskErr: <nil>
	Aug 11 23:07:41 addons-557401 kubelet[1359]: E0811 23:07:41.105838    1359 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/64ca28d96f4a4f2d1d73083b5d86dbf3c2255c1a187011848bb35cd712ec7371/diff" to get inode usage: stat /var/lib/containers/storage/overlay/64ca28d96f4a4f2d1d73083b5d86dbf3c2255c1a187011848bb35cd712ec7371/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/ingress-nginx_ingress-nginx-admission-patch-xqj8j_f635f399-3597-4bbf-93b8-16a1b3e4ab5a/patch/2.log" to get inode usage: stat /var/log/pods/ingress-nginx_ingress-nginx-admission-patch-xqj8j_f635f399-3597-4bbf-93b8-16a1b3e4ab5a/patch/2.log: no such file or directory
	Aug 11 23:07:41 addons-557401 kubelet[1359]: E0811 23:07:41.107980    1359 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/50f1d866f345ba66ca513f51e1a278b846699787ecb8faae2e7febc8af28f7bf/diff" to get inode usage: stat /var/lib/containers/storage/overlay/50f1d866f345ba66ca513f51e1a278b846699787ecb8faae2e7febc8af28f7bf/diff: no such file or directory, extraDiskErr: <nil>
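	
	The fsHandler and manager errors above are cAdvisor trying to stat overlay layers and log files of containers that had just been removed (the ingress-nginx admission jobs), not a kubelet fault. A sketch of a cross-check, assuming crictl is present in the node image as it normally is for the crio runtime:
	
	    out/minikube-linux-arm64 -p addons-557401 ssh "sudo crictl ps -a | grep 0fd341bea6e1 || echo container gone"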
	
	* 
	* ==> storage-provisioner [7c3cb13bf6c34c9399e915fceb825e5f5c5617ef2403ae0f4e711f733f6eda1a] <==
	* I0811 23:03:27.928978       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0811 23:03:27.948244       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0811 23:03:27.948329       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0811 23:03:27.958813       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0811 23:03:27.959043       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-557401_e6137bde-0141-426c-8b8b-f989144bae38!
	I0811 23:03:27.960066       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ee548325-5aaf-4e7b-b98a-e42060153f92", APIVersion:"v1", ResourceVersion:"847", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-557401_e6137bde-0141-426c-8b8b-f989144bae38 became leader
	I0811 23:03:28.059431       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-557401_e6137bde-0141-426c-8b8b-f989144bae38!
	

-- /stdout --
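The storage-provisioner section above shows a clean leader election on the kube-system/k8s.io-minikube-hostpath lock, which this provisioner keeps in an Endpoints object. A hedged way to inspect the lock directly:

    kubectl --context addons-557401 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml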
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-557401 -n addons-557401
helpers_test.go:261: (dbg) Run:  kubectl --context addons-557401 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (170.05s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (183.47s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-200414 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-200414 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (15.649632696s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-200414 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-200414 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [66dbc3c9-d407-407e-ada3-574b903f8a53] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [66dbc3c9-d407-407e-ada3-574b903f8a53] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.017710298s
addons_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-200414 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0811 23:16:59.473801    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/functional-327081/client.crt: no such file or directory
E0811 23:16:59.479144    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/functional-327081/client.crt: no such file or directory
E0811 23:16:59.489428    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/functional-327081/client.crt: no such file or directory
E0811 23:16:59.509729    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/functional-327081/client.crt: no such file or directory
E0811 23:16:59.550011    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/functional-327081/client.crt: no such file or directory
E0811 23:16:59.630367    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/functional-327081/client.crt: no such file or directory
E0811 23:16:59.790783    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/functional-327081/client.crt: no such file or directory
E0811 23:17:00.119554    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/functional-327081/client.crt: no such file or directory
E0811 23:17:00.759919    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/functional-327081/client.crt: no such file or directory
E0811 23:17:02.040176    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/functional-327081/client.crt: no such file or directory
E0811 23:17:04.600743    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/functional-327081/client.crt: no such file or directory
E0811 23:17:09.721009    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/functional-327081/client.crt: no such file or directory
E0811 23:17:19.961234    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/functional-327081/client.crt: no such file or directory
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ingress-addon-legacy-200414 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.610678131s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
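Exit status 28 here is curl's exit code for an operation timeout (CURLE_OPERATION_TIMEDOUT): nothing answered on 127.0.0.1:80 inside the node before the deadline. A sketch of a manual reproduction with an explicit timeout, plus a look at the ingress controller under the same profile:

    out/minikube-linux-arm64 -p ingress-addon-legacy-200414 ssh "curl -sv -m 10 -H 'Host: nginx.example.com' http://127.0.0.1/"
    kubectl --context ingress-addon-legacy-200414 -n ingress-nginx get pods,svc -o wide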
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-200414 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-200414 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
E0811 23:17:40.442279    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/functional-327081/client.crt: no such file or directory
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.00427707s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
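The nslookup timeout means the minikube-ingress-dns server at 192.168.49.2 never answered on UDP/53 from the host. A hedged pair of follow-ups, assuming the pod keeps the same name here as in the addons cluster above (kube-ingress-dns-minikube):

    dig +time=3 +tries=1 hello-john.test @192.168.49.2
    kubectl --context ingress-addon-legacy-200414 -n kube-system get pod kube-ingress-dns-minikube -o wide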
addons_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-200414 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-200414 addons disable ingress-dns --alsologtostderr -v=1: (1.205888877s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-200414 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-200414 addons disable ingress --alsologtostderr -v=1: (7.55306237s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-200414
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-200414:

-- stdout --
	[
	    {
	        "Id": "8fd44b754561f655224224b133b90a0718ee103ba1ba2126691eeb758a5e2bdb",
	        "Created": "2023-08-11T23:13:34.76919911Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 35428,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-08-11T23:13:35.096206953Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:abe4482d178dd08cce0cdcb8e444349673c3edfa8e7d6462144a8d9173479eb6",
	        "ResolvConfPath": "/var/lib/docker/containers/8fd44b754561f655224224b133b90a0718ee103ba1ba2126691eeb758a5e2bdb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8fd44b754561f655224224b133b90a0718ee103ba1ba2126691eeb758a5e2bdb/hostname",
	        "HostsPath": "/var/lib/docker/containers/8fd44b754561f655224224b133b90a0718ee103ba1ba2126691eeb758a5e2bdb/hosts",
	        "LogPath": "/var/lib/docker/containers/8fd44b754561f655224224b133b90a0718ee103ba1ba2126691eeb758a5e2bdb/8fd44b754561f655224224b133b90a0718ee103ba1ba2126691eeb758a5e2bdb-json.log",
	        "Name": "/ingress-addon-legacy-200414",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-200414:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-200414",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6f26bc0452296f83ffef6ea9de49303e24f4a9e83466eaa188f674790e70b471-init/diff:/var/lib/docker/overlay2/9f8bf17bd2eed1bf502486fc30f9be0589884e58aed50b5fbf77bc48ebc9a592/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6f26bc0452296f83ffef6ea9de49303e24f4a9e83466eaa188f674790e70b471/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6f26bc0452296f83ffef6ea9de49303e24f4a9e83466eaa188f674790e70b471/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6f26bc0452296f83ffef6ea9de49303e24f4a9e83466eaa188f674790e70b471/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-200414",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-200414/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-200414",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-200414",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-200414",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "305334599d200de26e101f56d60775a19ccfe54a36a56ea9ee3472f7e841b8f3",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/305334599d20",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-200414": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "8fd44b754561",
	                        "ingress-addon-legacy-200414"
	                    ],
	                    "NetworkID": "2269d5dd1abe33e4915cf31394ff4ca90aaa43f44f64f5ffff435afb9e3049a2",
	                    "EndpointID": "b4b4527b64eb33d5a4daed1930fd4ac721651d445380fa69ef4f5745d004fedc",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
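The inspect output confirms the node container is running with the expected 192.168.49.2 address on the ingress-addon-legacy-200414 network. When only one field is needed, docker inspect's Go-template formatter is less error-prone than scanning the full JSON; index is required because the network name contains hyphens:

    docker inspect -f '{{ (index .NetworkSettings.Networks "ingress-addon-legacy-200414").IPAddress }}' ingress-addon-legacy-200414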
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-200414 -n ingress-addon-legacy-200414
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-200414 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-200414 logs -n 25: (1.355100672s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image   | functional-327081 image load --daemon                                  | functional-327081           | jenkins | v1.31.1 | 11 Aug 23 23:12 UTC | 11 Aug 23 23:12 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-327081               |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-327081 image ls                                             | functional-327081           | jenkins | v1.31.1 | 11 Aug 23 23:12 UTC | 11 Aug 23 23:12 UTC |
	| image   | functional-327081 image load --daemon                                  | functional-327081           | jenkins | v1.31.1 | 11 Aug 23 23:12 UTC | 11 Aug 23 23:12 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-327081               |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-327081 image ls                                             | functional-327081           | jenkins | v1.31.1 | 11 Aug 23 23:12 UTC | 11 Aug 23 23:12 UTC |
	| image   | functional-327081 image save                                           | functional-327081           | jenkins | v1.31.1 | 11 Aug 23 23:12 UTC | 11 Aug 23 23:12 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-327081               |                             |         |         |                     |                     |
	|         | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-327081 image rm                                             | functional-327081           | jenkins | v1.31.1 | 11 Aug 23 23:12 UTC | 11 Aug 23 23:12 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-327081               |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-327081 image ls                                             | functional-327081           | jenkins | v1.31.1 | 11 Aug 23 23:12 UTC | 11 Aug 23 23:12 UTC |
	| image   | functional-327081 image load                                           | functional-327081           | jenkins | v1.31.1 | 11 Aug 23 23:12 UTC | 11 Aug 23 23:12 UTC |
	|         | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-327081 image ls                                             | functional-327081           | jenkins | v1.31.1 | 11 Aug 23 23:12 UTC | 11 Aug 23 23:12 UTC |
	| image   | functional-327081 image save --daemon                                  | functional-327081           | jenkins | v1.31.1 | 11 Aug 23 23:12 UTC | 11 Aug 23 23:13 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-327081               |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-327081                                                      | functional-327081           | jenkins | v1.31.1 | 11 Aug 23 23:13 UTC | 11 Aug 23 23:13 UTC |
	|         | image ls --format short                                                |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-327081                                                      | functional-327081           | jenkins | v1.31.1 | 11 Aug 23 23:13 UTC | 11 Aug 23 23:13 UTC |
	|         | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-327081                                                      | functional-327081           | jenkins | v1.31.1 | 11 Aug 23 23:13 UTC | 11 Aug 23 23:13 UTC |
	|         | image ls --format json                                                 |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-327081                                                      | functional-327081           | jenkins | v1.31.1 | 11 Aug 23 23:13 UTC | 11 Aug 23 23:13 UTC |
	|         | image ls --format table                                                |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh     | functional-327081 ssh pgrep                                            | functional-327081           | jenkins | v1.31.1 | 11 Aug 23 23:13 UTC |                     |
	|         | buildkitd                                                              |                             |         |         |                     |                     |
	| image   | functional-327081 image build -t                                       | functional-327081           | jenkins | v1.31.1 | 11 Aug 23 23:13 UTC | 11 Aug 23 23:13 UTC |
	|         | localhost/my-image:functional-327081                                   |                             |         |         |                     |                     |
	|         | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image   | functional-327081 image ls                                             | functional-327081           | jenkins | v1.31.1 | 11 Aug 23 23:13 UTC | 11 Aug 23 23:13 UTC |
	| delete  | -p functional-327081                                                   | functional-327081           | jenkins | v1.31.1 | 11 Aug 23 23:13 UTC | 11 Aug 23 23:13 UTC |
	| start   | -p ingress-addon-legacy-200414                                         | ingress-addon-legacy-200414 | jenkins | v1.31.1 | 11 Aug 23 23:13 UTC | 11 Aug 23 23:14 UTC |
	|         | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|         | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|         | -v=5 --driver=docker                                                   |                             |         |         |                     |                     |
	|         | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-200414                                            | ingress-addon-legacy-200414 | jenkins | v1.31.1 | 11 Aug 23 23:14 UTC | 11 Aug 23 23:14 UTC |
	|         | addons enable ingress                                                  |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-200414                                            | ingress-addon-legacy-200414 | jenkins | v1.31.1 | 11 Aug 23 23:14 UTC | 11 Aug 23 23:14 UTC |
	|         | addons enable ingress-dns                                              |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| ssh     | ingress-addon-legacy-200414                                            | ingress-addon-legacy-200414 | jenkins | v1.31.1 | 11 Aug 23 23:15 UTC |                     |
	|         | ssh curl -s http://127.0.0.1/                                          |                             |         |         |                     |                     |
	|         | -H 'Host: nginx.example.com'                                           |                             |         |         |                     |                     |
	| ip      | ingress-addon-legacy-200414 ip                                         | ingress-addon-legacy-200414 | jenkins | v1.31.1 | 11 Aug 23 23:17 UTC | 11 Aug 23 23:17 UTC |
	| addons  | ingress-addon-legacy-200414                                            | ingress-addon-legacy-200414 | jenkins | v1.31.1 | 11 Aug 23 23:17 UTC | 11 Aug 23 23:17 UTC |
	|         | addons disable ingress-dns                                             |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-200414                                            | ingress-addon-legacy-200414 | jenkins | v1.31.1 | 11 Aug 23 23:17 UTC | 11 Aug 23 23:17 UTC |
	|         | addons disable ingress                                                 |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	|---------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
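	
	The Audit table above is minikube's per-profile command history. It can usually be pulled on its own, without the full 25-line log bundle, assuming this minikube build carries the --audit flag for the logs command:
	
	    out/minikube-linux-arm64 -p ingress-addon-legacy-200414 logs --audit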
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/11 23:13:06
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.20.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0811 23:13:06.720435   34971 out.go:296] Setting OutFile to fd 1 ...
	I0811 23:13:06.720627   34971 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:13:06.720656   34971 out.go:309] Setting ErrFile to fd 2...
	I0811 23:13:06.720675   34971 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:13:06.720944   34971 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17044-2333/.minikube/bin
	I0811 23:13:06.721392   34971 out.go:303] Setting JSON to false
	I0811 23:13:06.722388   34971 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":3335,"bootTime":1691792252,"procs":369,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0811 23:13:06.722478   34971 start.go:138] virtualization:  
	I0811 23:13:06.725013   34971 out.go:177] * [ingress-addon-legacy-200414] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	I0811 23:13:06.727134   34971 out.go:177]   - MINIKUBE_LOCATION=17044
	I0811 23:13:06.729012   34971 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0811 23:13:06.727274   34971 notify.go:220] Checking for updates...
	I0811 23:13:06.730902   34971 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17044-2333/kubeconfig
	I0811 23:13:06.732707   34971 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17044-2333/.minikube
	I0811 23:13:06.734480   34971 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0811 23:13:06.735922   34971 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0811 23:13:06.737487   34971 driver.go:373] Setting default libvirt URI to qemu:///system
	I0811 23:13:06.760373   34971 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0811 23:13:06.760468   34971 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0811 23:13:06.853394   34971 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2023-08-11 23:13:06.843150334 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0811 23:13:06.853505   34971 docker.go:294] overlay module found
	I0811 23:13:06.855261   34971 out.go:177] * Using the docker driver based on user configuration
	I0811 23:13:06.857061   34971 start.go:298] selected driver: docker
	I0811 23:13:06.857077   34971 start.go:901] validating driver "docker" against <nil>
	I0811 23:13:06.857134   34971 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0811 23:13:06.857785   34971 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0811 23:13:06.929386   34971 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2023-08-11 23:13:06.919286648 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0811 23:13:06.929550   34971 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0811 23:13:06.929779   34971 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0811 23:13:06.931958   34971 out.go:177] * Using Docker driver with root privileges
	I0811 23:13:06.933814   34971 cni.go:84] Creating CNI manager for ""
	I0811 23:13:06.933830   34971 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0811 23:13:06.933845   34971 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0811 23:13:06.933859   34971 start_flags.go:319] config:
	{Name:ingress-addon-legacy-200414 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-200414 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0811 23:13:06.937449   34971 out.go:177] * Starting control plane node ingress-addon-legacy-200414 in cluster ingress-addon-legacy-200414
	I0811 23:13:06.939363   34971 cache.go:122] Beginning downloading kic base image for docker with crio
	I0811 23:13:06.941229   34971 out.go:177] * Pulling base image ...
	I0811 23:13:06.942931   34971 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0811 23:13:06.943005   34971 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 in local docker daemon
	I0811 23:13:06.960806   34971 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 in local docker daemon, skipping pull
	I0811 23:13:06.960852   34971 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 exists in daemon, skipping load
	I0811 23:13:07.008432   34971 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I0811 23:13:07.008460   34971 cache.go:57] Caching tarball of preloaded images
	I0811 23:13:07.008674   34971 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0811 23:13:07.012242   34971 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0811 23:13:07.013969   34971 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0811 23:13:07.138837   34971 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4?checksum=md5:8ddd7f37d9a9977fe856222993d36c3d -> /home/jenkins/minikube-integration/17044-2333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I0811 23:13:26.759416   34971 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0811 23:13:26.759521   34971 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17044-2333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0811 23:13:27.936934   34971 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I0811 23:13:27.937338   34971 profile.go:148] Saving config to /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/config.json ...
	I0811 23:13:27.937376   34971 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/config.json: {Name:mk4613a1de75ba8ff665d4cf32ff20cb1aaf7420 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:13:27.937678   34971 cache.go:195] Successfully downloaded all kic artifacts
	I0811 23:13:27.937737   34971 start.go:365] acquiring machines lock for ingress-addon-legacy-200414: {Name:mkf14a7beb18f859c62e1fd190f759b5cbb734ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 23:13:27.937802   34971 start.go:369] acquired machines lock for "ingress-addon-legacy-200414" in 51.003µs
	I0811 23:13:27.937824   34971 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-200414 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-200414 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0811 23:13:27.937899   34971 start.go:125] createHost starting for "" (driver="docker")
	I0811 23:13:27.939968   34971 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0811 23:13:27.940206   34971 start.go:159] libmachine.API.Create for "ingress-addon-legacy-200414" (driver="docker")
	I0811 23:13:27.940241   34971 client.go:168] LocalClient.Create starting
	I0811 23:13:27.940325   34971 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem
	I0811 23:13:27.940358   34971 main.go:141] libmachine: Decoding PEM data...
	I0811 23:13:27.940377   34971 main.go:141] libmachine: Parsing certificate...
	I0811 23:13:27.940432   34971 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17044-2333/.minikube/certs/cert.pem
	I0811 23:13:27.940456   34971 main.go:141] libmachine: Decoding PEM data...
	I0811 23:13:27.940470   34971 main.go:141] libmachine: Parsing certificate...
	I0811 23:13:27.940806   34971 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-200414 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0811 23:13:27.959014   34971 cli_runner.go:211] docker network inspect ingress-addon-legacy-200414 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0811 23:13:27.959101   34971 network_create.go:281] running [docker network inspect ingress-addon-legacy-200414] to gather additional debugging logs...
	I0811 23:13:27.959122   34971 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-200414
	W0811 23:13:27.976117   34971 cli_runner.go:211] docker network inspect ingress-addon-legacy-200414 returned with exit code 1
	I0811 23:13:27.976150   34971 network_create.go:284] error running [docker network inspect ingress-addon-legacy-200414]: docker network inspect ingress-addon-legacy-200414: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-200414 not found
	I0811 23:13:27.976165   34971 network_create.go:286] output of [docker network inspect ingress-addon-legacy-200414]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-200414 not found
	
	** /stderr **
	I0811 23:13:27.976228   34971 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0811 23:13:27.994288   34971 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000adb440}
	I0811 23:13:27.994324   34971 network_create.go:123] attempt to create docker network ingress-addon-legacy-200414 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0811 23:13:27.994384   34971 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-200414 ingress-addon-legacy-200414
	I0811 23:13:28.073787   34971 network_create.go:107] docker network ingress-addon-legacy-200414 192.168.49.0/24 created
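
A minimal cross-check of the network just created, assuming the docker CLI on the CI host (illustrative; not part of the captured log):

	docker network inspect ingress-addon-legacy-200414 \
	  --format '{{(index .IPAM.Config 0).Subnet}} gw={{(index .IPAM.Config 0).Gateway}}'
	# expected: 192.168.49.0/24 gw=192.168.49.1
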
	I0811 23:13:28.073818   34971 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-200414" container
	I0811 23:13:28.073891   34971 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0811 23:13:28.090440   34971 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-200414 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-200414 --label created_by.minikube.sigs.k8s.io=true
	I0811 23:13:28.108143   34971 oci.go:103] Successfully created a docker volume ingress-addon-legacy-200414
	I0811 23:13:28.108233   34971 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-200414-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-200414 --entrypoint /usr/bin/test -v ingress-addon-legacy-200414:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 -d /var/lib
	I0811 23:13:29.631075   34971 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-200414-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-200414 --entrypoint /usr/bin/test -v ingress-addon-legacy-200414:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 -d /var/lib: (1.522800758s)
	I0811 23:13:29.631104   34971 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-200414
	I0811 23:13:29.631123   34971 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0811 23:13:29.631144   34971 kic.go:190] Starting extracting preloaded images to volume ...
	I0811 23:13:29.631226   34971 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17044-2333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-200414:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 -I lz4 -xf /preloaded.tar -C /extractDir
	I0811 23:13:34.684264   34971 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17044-2333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-200414:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 -I lz4 -xf /preloaded.tar -C /extractDir: (5.053001967s)
	I0811 23:13:34.684294   34971 kic.go:199] duration metric: took 5.053147 seconds to extract preloaded images to volume
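
To see what the extraction step left behind, the same named volume can be mounted into a throwaway container; a sketch (the busybox image and the lib/ path inside the volume are assumptions, not taken from the log):

	docker run --rm -v ingress-addon-legacy-200414:/data busybox ls /data/lib
	# the preloaded CRI-O image store should appear somewhere under /data/lib
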
	W0811 23:13:34.684417   34971 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0811 23:13:34.684539   34971 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0811 23:13:34.752982   34971 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-200414 --name ingress-addon-legacy-200414 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-200414 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-200414 --network ingress-addon-legacy-200414 --ip 192.168.49.2 --volume ingress-addon-legacy-200414:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37
	I0811 23:13:35.106391   34971 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-200414 --format={{.State.Running}}
	I0811 23:13:35.137391   34971 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-200414 --format={{.State.Status}}
	I0811 23:13:35.162814   34971 cli_runner.go:164] Run: docker exec ingress-addon-legacy-200414 stat /var/lib/dpkg/alternatives/iptables
	I0811 23:13:35.252230   34971 oci.go:144] the created container "ingress-addon-legacy-200414" has a running status.
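
The node container publishes SSH and the API server on dynamic loopback ports; they can be resolved with plain docker (a sketch):

	docker port ingress-addon-legacy-200414 22     # e.g. 127.0.0.1:32787, the port dialed below
	docker port ingress-addon-legacy-200414 8443
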
	I0811 23:13:35.252254   34971 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17044-2333/.minikube/machines/ingress-addon-legacy-200414/id_rsa...
	I0811 23:13:35.768316   34971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/machines/ingress-addon-legacy-200414/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0811 23:13:35.768436   34971 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17044-2333/.minikube/machines/ingress-addon-legacy-200414/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0811 23:13:35.795681   34971 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-200414 --format={{.State.Status}}
	I0811 23:13:35.837608   34971 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0811 23:13:35.837626   34971 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-200414 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0811 23:13:35.936732   34971 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-200414 --format={{.State.Status}}
	I0811 23:13:35.970322   34971 machine.go:88] provisioning docker machine ...
	I0811 23:13:35.970365   34971 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-200414"
	I0811 23:13:35.970433   34971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-200414
	I0811 23:13:35.994973   34971 main.go:141] libmachine: Using SSH client type: native
	I0811 23:13:35.995581   34971 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f940] 0x3a22d0 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0811 23:13:35.995604   34971 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-200414 && echo "ingress-addon-legacy-200414" | sudo tee /etc/hostname
	I0811 23:13:36.202000   34971 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-200414
	
	I0811 23:13:36.202094   34971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-200414
	I0811 23:13:36.236396   34971 main.go:141] libmachine: Using SSH client type: native
	I0811 23:13:36.236848   34971 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f940] 0x3a22d0 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0811 23:13:36.236873   34971 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-200414' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-200414/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-200414' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0811 23:13:36.387881   34971 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0811 23:13:36.387910   34971 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17044-2333/.minikube CaCertPath:/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17044-2333/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17044-2333/.minikube}
	I0811 23:13:36.387929   34971 ubuntu.go:177] setting up certificates
	I0811 23:13:36.387940   34971 provision.go:83] configureAuth start
	I0811 23:13:36.388020   34971 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-200414
	I0811 23:13:36.412871   34971 provision.go:138] copyHostCerts
	I0811 23:13:36.412915   34971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17044-2333/.minikube/key.pem
	I0811 23:13:36.412952   34971 exec_runner.go:144] found /home/jenkins/minikube-integration/17044-2333/.minikube/key.pem, removing ...
	I0811 23:13:36.412962   34971 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17044-2333/.minikube/key.pem
	I0811 23:13:36.413042   34971 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17044-2333/.minikube/key.pem (1675 bytes)
	I0811 23:13:36.413196   34971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17044-2333/.minikube/ca.pem
	I0811 23:13:36.413223   34971 exec_runner.go:144] found /home/jenkins/minikube-integration/17044-2333/.minikube/ca.pem, removing ...
	I0811 23:13:36.413233   34971 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17044-2333/.minikube/ca.pem
	I0811 23:13:36.413272   34971 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17044-2333/.minikube/ca.pem (1082 bytes)
	I0811 23:13:36.413332   34971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17044-2333/.minikube/cert.pem
	I0811 23:13:36.413352   34971 exec_runner.go:144] found /home/jenkins/minikube-integration/17044-2333/.minikube/cert.pem, removing ...
	I0811 23:13:36.413356   34971 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17044-2333/.minikube/cert.pem
	I0811 23:13:36.413395   34971 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17044-2333/.minikube/cert.pem (1123 bytes)
	I0811 23:13:36.413459   34971 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17044-2333/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-200414 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-200414]
	I0811 23:13:36.699488   34971 provision.go:172] copyRemoteCerts
	I0811 23:13:36.699552   34971 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0811 23:13:36.699593   34971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-200414
	I0811 23:13:36.717257   34971 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/ingress-addon-legacy-200414/id_rsa Username:docker}
	I0811 23:13:36.819652   34971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0811 23:13:36.819733   34971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0811 23:13:36.849043   34971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0811 23:13:36.849147   34971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0811 23:13:36.877334   34971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0811 23:13:36.877395   34971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0811 23:13:36.905596   34971 provision.go:86] duration metric: configureAuth took 517.631327ms
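
A sanity check of the server certificate just copied to /etc/docker/server.pem, assuming openssl 1.1.1+ inside the kicbase image (illustrative, not from the log):

	docker exec ingress-addon-legacy-200414 \
	  openssl x509 -in /etc/docker/server.pem -noout -ext subjectAltName
	# should list 192.168.49.2, 127.0.0.1, localhost, minikube, ingress-addon-legacy-200414
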
	I0811 23:13:36.905620   34971 ubuntu.go:193] setting minikube options for container-runtime
	I0811 23:13:36.905811   34971 config.go:182] Loaded profile config "ingress-addon-legacy-200414": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0811 23:13:36.905910   34971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-200414
	I0811 23:13:36.923456   34971 main.go:141] libmachine: Using SSH client type: native
	I0811 23:13:36.923888   34971 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f940] 0x3a22d0 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0811 23:13:36.923906   34971 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0811 23:13:37.211005   34971 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0811 23:13:37.211024   34971 machine.go:91] provisioned docker machine in 1.240669318s
	I0811 23:13:37.211034   34971 client.go:171] LocalClient.Create took 9.27078739s
	I0811 23:13:37.211049   34971 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-200414" took 9.270842094s
	I0811 23:13:37.211065   34971 start.go:300] post-start starting for "ingress-addon-legacy-200414" (driver="docker")
	I0811 23:13:37.211077   34971 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0811 23:13:37.211158   34971 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0811 23:13:37.211202   34971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-200414
	I0811 23:13:37.230120   34971 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/ingress-addon-legacy-200414/id_rsa Username:docker}
	I0811 23:13:37.341490   34971 ssh_runner.go:195] Run: cat /etc/os-release
	I0811 23:13:37.346450   34971 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0811 23:13:37.346490   34971 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0811 23:13:37.346501   34971 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0811 23:13:37.346508   34971 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0811 23:13:37.346517   34971 filesync.go:126] Scanning /home/jenkins/minikube-integration/17044-2333/.minikube/addons for local assets ...
	I0811 23:13:37.346585   34971 filesync.go:126] Scanning /home/jenkins/minikube-integration/17044-2333/.minikube/files for local assets ...
	I0811 23:13:37.346672   34971 filesync.go:149] local asset: /home/jenkins/minikube-integration/17044-2333/.minikube/files/etc/ssl/certs/76342.pem -> 76342.pem in /etc/ssl/certs
	I0811 23:13:37.346683   34971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/files/etc/ssl/certs/76342.pem -> /etc/ssl/certs/76342.pem
	I0811 23:13:37.346799   34971 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0811 23:13:37.359120   34971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/files/etc/ssl/certs/76342.pem --> /etc/ssl/certs/76342.pem (1708 bytes)
	I0811 23:13:37.389315   34971 start.go:303] post-start completed in 178.234088ms
	I0811 23:13:37.389691   34971 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-200414
	I0811 23:13:37.413092   34971 profile.go:148] Saving config to /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/config.json ...
	I0811 23:13:37.413387   34971 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0811 23:13:37.413436   34971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-200414
	I0811 23:13:37.432149   34971 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/ingress-addon-legacy-200414/id_rsa Username:docker}
	I0811 23:13:37.531209   34971 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0811 23:13:37.537004   34971 start.go:128] duration metric: createHost completed in 9.599092843s
	I0811 23:13:37.537029   34971 start.go:83] releasing machines lock for "ingress-addon-legacy-200414", held for 9.599216577s
	I0811 23:13:37.537125   34971 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-200414
	I0811 23:13:37.554624   34971 ssh_runner.go:195] Run: cat /version.json
	I0811 23:13:37.554634   34971 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0811 23:13:37.554674   34971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-200414
	I0811 23:13:37.554697   34971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-200414
	I0811 23:13:37.575424   34971 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/ingress-addon-legacy-200414/id_rsa Username:docker}
	I0811 23:13:37.580884   34971 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/ingress-addon-legacy-200414/id_rsa Username:docker}
	I0811 23:13:37.681894   34971 ssh_runner.go:195] Run: systemctl --version
	I0811 23:13:37.822381   34971 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0811 23:13:37.977396   34971 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0811 23:13:37.983000   34971 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0811 23:13:38.008641   34971 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0811 23:13:38.008827   34971 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0811 23:13:38.046770   34971 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
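
The rename to *.mk_disabled works because CNI config loaders only consider *.conf, *.conflist, and *.json files; the effect can be inspected with (sketch):

	docker exec ingress-addon-legacy-200414 ls /etc/cni/net.d
	# 100-crio-bridge.conf.mk_disabled, 87-podman-bridge.conflist.mk_disabled, ...
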
	I0811 23:13:38.046865   34971 start.go:466] detecting cgroup driver to use...
	I0811 23:13:38.046927   34971 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0811 23:13:38.046995   34971 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0811 23:13:38.067768   34971 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0811 23:13:38.082497   34971 docker.go:196] disabling cri-docker service (if available) ...
	I0811 23:13:38.082609   34971 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0811 23:13:38.098466   34971 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0811 23:13:38.115402   34971 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0811 23:13:38.213197   34971 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0811 23:13:38.316966   34971 docker.go:212] disabling docker service ...
	I0811 23:13:38.317033   34971 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0811 23:13:38.338254   34971 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0811 23:13:38.351384   34971 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0811 23:13:38.452698   34971 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0811 23:13:38.561859   34971 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0811 23:13:38.575963   34971 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0811 23:13:38.595514   34971 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0811 23:13:38.595576   34971 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0811 23:13:38.607587   34971 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0811 23:13:38.607652   34971 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0811 23:13:38.619639   34971 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0811 23:13:38.631593   34971 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0811 23:13:38.644328   34971 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0811 23:13:38.655469   34971 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0811 23:13:38.665659   34971 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0811 23:13:38.675767   34971 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0811 23:13:38.761467   34971 ssh_runner.go:195] Run: sudo systemctl restart crio
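
Taken together, the sed edits above should leave the relevant keys of /etc/crio/crio.conf.d/02-crio.conf as shown here; a reconstruction from the commands, not a capture of the file:

	docker exec ingress-addon-legacy-200414 \
	  grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.2"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
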
	I0811 23:13:38.881029   34971 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0811 23:13:38.881202   34971 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0811 23:13:38.886182   34971 start.go:534] Will wait 60s for crictl version
	I0811 23:13:38.886285   34971 ssh_runner.go:195] Run: which crictl
	I0811 23:13:38.890502   34971 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0811 23:13:38.942396   34971 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0811 23:13:38.942527   34971 ssh_runner.go:195] Run: crio --version
	I0811 23:13:38.987485   34971 ssh_runner.go:195] Run: crio --version
	I0811 23:13:39.032517   34971 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I0811 23:13:39.034574   34971 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-200414 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0811 23:13:39.052498   34971 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0811 23:13:39.057069   34971 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0811 23:13:39.071522   34971 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0811 23:13:39.071604   34971 ssh_runner.go:195] Run: sudo crictl images --output json
	I0811 23:13:39.126415   34971 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0811 23:13:39.126486   34971 ssh_runner.go:195] Run: which lz4
	I0811 23:13:39.131413   34971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0811 23:13:39.131525   34971 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0811 23:13:39.136334   34971 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0811 23:13:39.136371   34971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (489766197 bytes)
	I0811 23:13:41.316082   34971 crio.go:444] Took 2.184600 seconds to copy over tarball
	I0811 23:13:41.316152   34971 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0811 23:13:44.058570   34971 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.742393276s)
	I0811 23:13:44.058636   34971 crio.go:451] Took 2.742528 seconds to extract the tarball
	I0811 23:13:44.058662   34971 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0811 23:13:44.146824   34971 ssh_runner.go:195] Run: sudo crictl images --output json
	I0811 23:13:44.186448   34971 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0811 23:13:44.186472   34971 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0811 23:13:44.186557   34971 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0811 23:13:44.186823   34971 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0811 23:13:44.186912   34971 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0811 23:13:44.187005   34971 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0811 23:13:44.187278   34971 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0811 23:13:44.187354   34971 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0811 23:13:44.187619   34971 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0811 23:13:44.187797   34971 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0811 23:13:44.188139   34971 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0811 23:13:44.188336   34971 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0811 23:13:44.188493   34971 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0811 23:13:44.188685   34971 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0811 23:13:44.189481   34971 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0811 23:13:44.189668   34971 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0811 23:13:44.189798   34971 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0811 23:13:44.190051   34971 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0811 23:13:44.618803   34971 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	W0811 23:13:44.636608   34971 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0811 23:13:44.636917   34971 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	W0811 23:13:44.668193   34971 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0811 23:13:44.668441   34971 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0811 23:13:44.679947   34971 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0811 23:13:44.680024   34971 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0811 23:13:44.680098   34971 ssh_runner.go:195] Run: which crictl
	W0811 23:13:44.681727   34971 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0811 23:13:44.681971   34971 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	W0811 23:13:44.697393   34971 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0811 23:13:44.697650   34971 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	W0811 23:13:44.699598   34971 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0811 23:13:44.699811   34971 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	W0811 23:13:44.701584   34971 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0811 23:13:44.701812   34971 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0811 23:13:44.724703   34971 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0811 23:13:44.724747   34971 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0811 23:13:44.724804   34971 ssh_runner.go:195] Run: which crictl
	W0811 23:13:44.805190   34971 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0811 23:13:44.805422   34971 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0811 23:13:44.817454   34971 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0811 23:13:44.817531   34971 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0811 23:13:44.817621   34971 ssh_runner.go:195] Run: which crictl
	I0811 23:13:44.817729   34971 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0811 23:13:44.824978   34971 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0811 23:13:44.825066   34971 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0811 23:13:44.825146   34971 ssh_runner.go:195] Run: which crictl
	I0811 23:13:44.862645   34971 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0811 23:13:44.862729   34971 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0811 23:13:44.862808   34971 ssh_runner.go:195] Run: which crictl
	I0811 23:13:44.871325   34971 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0811 23:13:44.871412   34971 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0811 23:13:44.871531   34971 ssh_runner.go:195] Run: which crictl
	I0811 23:13:44.871612   34971 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0811 23:13:44.871775   34971 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0811 23:13:44.871827   34971 ssh_runner.go:195] Run: which crictl
	I0811 23:13:44.871733   34971 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0811 23:13:45.062781   34971 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0811 23:13:45.062854   34971 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0811 23:13:45.062903   34971 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0811 23:13:45.062947   34971 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0811 23:13:45.063030   34971 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I0811 23:13:45.063088   34971 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0811 23:13:45.063131   34971 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0811 23:13:45.063178   34971 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0811 23:13:45.063209   34971 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0811 23:13:45.063245   34971 ssh_runner.go:195] Run: which crictl
	I0811 23:13:45.191101   34971 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0811 23:13:45.191249   34971 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I0811 23:13:45.212729   34971 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I0811 23:13:45.223020   34971 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0811 23:13:45.223094   34971 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I0811 23:13:45.223148   34971 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I0811 23:13:45.281817   34971 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0811 23:13:45.281888   34971 cache_images.go:92] LoadImages completed in 1.095403338s
	W0811 23:13:45.281967   34971 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20: no such file or directory
	I0811 23:13:45.282046   34971 ssh_runner.go:195] Run: crio config
	I0811 23:13:45.352695   34971 cni.go:84] Creating CNI manager for ""
	I0811 23:13:45.352725   34971 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0811 23:13:45.352770   34971 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0811 23:13:45.352797   34971 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-200414 NodeName:ingress-addon-legacy-200414 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0811 23:13:45.352960   34971 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-200414"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
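
The rendered config is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below, and it can be run through kubeadm's own preflight checks against the pinned binary; the exact phase invocation here is an assumption, not taken from the log:

	docker exec ingress-addon-legacy-200414 sudo \
	  /var/lib/minikube/binaries/v1.18.20/kubeadm init phase preflight \
	  --config /var/tmp/minikube/kubeadm.yaml.new
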
	
	I0811 23:13:45.353052   34971 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-200414 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-200414 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
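
The empty ExecStart= in the drop-in above is the standard systemd idiom: it clears the ExecStart inherited from the base kubelet.service before the next line sets the override. The merged unit can be reviewed with (sketch):

	docker exec ingress-addon-legacy-200414 systemctl cat kubelet
	# prints kubelet.service followed by the 10-kubeadm.conf drop-in shown above
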
	I0811 23:13:45.353136   34971 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0811 23:13:45.364117   34971 binaries.go:44] Found k8s binaries, skipping transfer
	I0811 23:13:45.364189   34971 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0811 23:13:45.374487   34971 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0811 23:13:45.395276   34971 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0811 23:13:45.415798   34971 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0811 23:13:45.436619   34971 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0811 23:13:45.440823   34971 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0811 23:13:45.453929   34971 certs.go:56] Setting up /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414 for IP: 192.168.49.2
	I0811 23:13:45.453967   34971 certs.go:190] acquiring lock for shared ca certs: {Name:mk92ef0e52f7a4bf6e55e35fe7431dc846a67439 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:13:45.454107   34971 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17044-2333/.minikube/ca.key
	I0811 23:13:45.454156   34971 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17044-2333/.minikube/proxy-client-ca.key
	I0811 23:13:45.454202   34971 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/client.key
	I0811 23:13:45.454217   34971 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/client.crt with IP's: []
	I0811 23:13:45.794642   34971 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/client.crt ...
	I0811 23:13:45.794672   34971 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/client.crt: {Name:mkd09d1d8106a47566430a3981d4be85041d8060 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:13:45.794862   34971 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/client.key ...
	I0811 23:13:45.794873   34971 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/client.key: {Name:mk6368ff00fd61222fdd48a6db751ef68e94cc3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:13:45.794957   34971 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/apiserver.key.dd3b5fb2
	I0811 23:13:45.794974   34971 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0811 23:13:46.211479   34971 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/apiserver.crt.dd3b5fb2 ...
	I0811 23:13:46.211513   34971 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/apiserver.crt.dd3b5fb2: {Name:mk425ffdeb31015eae7b1d03d93360851562f28b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:13:46.211695   34971 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/apiserver.key.dd3b5fb2 ...
	I0811 23:13:46.211707   34971 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/apiserver.key.dd3b5fb2: {Name:mkf43533aa82d50df7c652fe3206f55bf9096b82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:13:46.211785   34971 certs.go:337] copying /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/apiserver.crt
	I0811 23:13:46.211870   34971 certs.go:341] copying /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/apiserver.key
	I0811 23:13:46.211928   34971 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/proxy-client.key
	I0811 23:13:46.211943   34971 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/proxy-client.crt with IP's: []
	I0811 23:13:46.574800   34971 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/proxy-client.crt ...
	I0811 23:13:46.574830   34971 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/proxy-client.crt: {Name:mkb6d81c843aa79317e3dfbe4dcfbb3c163e3b2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:13:46.575017   34971 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/proxy-client.key ...
	I0811 23:13:46.575031   34971 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/proxy-client.key: {Name:mk7ca3c08716062b405694a2d98f989e762b5899 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
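	The crypto.go/lock.go lines above show the pattern minikube follows for every profile credential: generate a key pair, sign it with the shared minikubeCA, and write the cert/key pair to disk. A minimal sketch of that sign-with-CA step using Go's standard library (the function name `signCert` and the loaded `caCert`/`caKey` inputs are illustrative, not minikube's actual code):

```go
package certsketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// signCert sketches the pattern logged above: create a fresh key, then have
// the CA (assumed already loaded) sign a certificate for it that lists the
// cluster IPs as SANs, e.g. 192.168.49.2, 10.96.0.1, 127.0.0.1, 10.0.0.1.
func signCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, cn string, ips []net.IP) error {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: cn},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return err
	}
	// PEM-encode the signed certificate (to stdout here; minikube writes files).
	return pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```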
	I0811 23:13:46.575109   34971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0811 23:13:46.575133   34971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0811 23:13:46.575145   34971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0811 23:13:46.575157   34971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0811 23:13:46.575182   34971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0811 23:13:46.575197   34971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0811 23:13:46.575213   34971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0811 23:13:46.575228   34971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0811 23:13:46.575283   34971 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/home/jenkins/minikube-integration/17044-2333/.minikube/certs/7634.pem (1338 bytes)
	W0811 23:13:46.575320   34971 certs.go:433] ignoring /home/jenkins/minikube-integration/17044-2333/.minikube/certs/home/jenkins/minikube-integration/17044-2333/.minikube/certs/7634_empty.pem, impossibly tiny 0 bytes
	I0811 23:13:46.575334   34971 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca-key.pem (1675 bytes)
	I0811 23:13:46.575362   34971 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem (1082 bytes)
	I0811 23:13:46.575392   34971 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/home/jenkins/minikube-integration/17044-2333/.minikube/certs/cert.pem (1123 bytes)
	I0811 23:13:46.575421   34971 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/home/jenkins/minikube-integration/17044-2333/.minikube/certs/key.pem (1675 bytes)
	I0811 23:13:46.575481   34971 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-2333/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17044-2333/.minikube/files/etc/ssl/certs/76342.pem (1708 bytes)
	I0811 23:13:46.575512   34971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/files/etc/ssl/certs/76342.pem -> /usr/share/ca-certificates/76342.pem
	I0811 23:13:46.575533   34971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0811 23:13:46.575548   34971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/7634.pem -> /usr/share/ca-certificates/7634.pem
	I0811 23:13:46.576131   34971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0811 23:13:46.605339   34971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0811 23:13:46.633182   34971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0811 23:13:46.661831   34971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0811 23:13:46.689648   34971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0811 23:13:46.718478   34971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0811 23:13:46.746591   34971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0811 23:13:46.773936   34971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0811 23:13:46.801985   34971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/files/etc/ssl/certs/76342.pem --> /usr/share/ca-certificates/76342.pem (1708 bytes)
	I0811 23:13:46.830299   34971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0811 23:13:46.858068   34971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/certs/7634.pem --> /usr/share/ca-certificates/7634.pem (1338 bytes)
	I0811 23:13:46.885540   34971 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0811 23:13:46.905994   34971 ssh_runner.go:195] Run: openssl version
	I0811 23:13:46.912846   34971 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/76342.pem && ln -fs /usr/share/ca-certificates/76342.pem /etc/ssl/certs/76342.pem"
	I0811 23:13:46.924487   34971 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/76342.pem
	I0811 23:13:46.928968   34971 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 11 23:09 /usr/share/ca-certificates/76342.pem
	I0811 23:13:46.929111   34971 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/76342.pem
	I0811 23:13:46.937304   34971 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/76342.pem /etc/ssl/certs/3ec20f2e.0"
	I0811 23:13:46.948762   34971 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0811 23:13:46.960052   34971 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0811 23:13:46.964639   34971 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 11 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I0811 23:13:46.964745   34971 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0811 23:13:46.973308   34971 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0811 23:13:46.984747   34971 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7634.pem && ln -fs /usr/share/ca-certificates/7634.pem /etc/ssl/certs/7634.pem"
	I0811 23:13:46.996141   34971 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7634.pem
	I0811 23:13:47.000677   34971 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 11 23:09 /usr/share/ca-certificates/7634.pem
	I0811 23:13:47.000739   34971 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7634.pem
	I0811 23:13:47.009888   34971 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7634.pem /etc/ssl/certs/51391683.0"
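	The three `test -L ... || ln -fs ...` runs above install each PEM into the node's trust store: OpenSSL looks certificates up in /etc/ssl/certs by subject hash, so each file is symlinked under `<hash>.0` (e.g. b5213941.0 for minikubeCA.pem). A sketch of the same step, shelling out to openssl the way ssh_runner effectively does over SSH (paths are taken from the log; running this locally would need the same files and root):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log above
	// `openssl x509 -hash -noout` prints the subject hash OpenSSL uses to
	// locate certificates in /etc/ssl/certs.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	// Equivalent of: test -L <link> || ln -fs <pem> <link>
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		if err := os.Symlink(pem, link); err != nil {
			panic(err)
		}
	}
	fmt.Println("installed", link)
}
```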
	I0811 23:13:47.021661   34971 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0811 23:13:47.026037   34971 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0811 23:13:47.026131   34971 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-200414 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-200414 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0811 23:13:47.026228   34971 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0811 23:13:47.026283   34971 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0811 23:13:47.069811   34971 cri.go:89] found id: ""
	I0811 23:13:47.069890   34971 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0811 23:13:47.080768   34971 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0811 23:13:47.091908   34971 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0811 23:13:47.092022   34971 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0811 23:13:47.103078   34971 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0811 23:13:47.103129   34971 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0811 23:13:47.166639   34971 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0811 23:13:47.167280   34971 kubeadm.go:322] [preflight] Running pre-flight checks
	I0811 23:13:47.221591   34971 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0811 23:13:47.221743   34971 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1040-aws
	I0811 23:13:47.221796   34971 kubeadm.go:322] OS: Linux
	I0811 23:13:47.221873   34971 kubeadm.go:322] CGROUPS_CPU: enabled
	I0811 23:13:47.221944   34971 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0811 23:13:47.222014   34971 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0811 23:13:47.222090   34971 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0811 23:13:47.222163   34971 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0811 23:13:47.222242   34971 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0811 23:13:47.314832   34971 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0811 23:13:47.314996   34971 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0811 23:13:47.315134   34971 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0811 23:13:47.549593   34971 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0811 23:13:47.550942   34971 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0811 23:13:47.551231   34971 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0811 23:13:47.653497   34971 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0811 23:13:47.657384   34971 out.go:204]   - Generating certificates and keys ...
	I0811 23:13:47.657576   34971 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0811 23:13:47.657688   34971 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0811 23:13:48.303599   34971 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0811 23:13:48.699791   34971 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0811 23:13:49.165538   34971 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0811 23:13:49.542400   34971 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0811 23:13:49.726894   34971 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0811 23:13:49.727257   34971 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-200414 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0811 23:13:50.178816   34971 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0811 23:13:50.178991   34971 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-200414 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0811 23:13:50.339088   34971 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0811 23:13:50.621748   34971 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0811 23:13:51.047017   34971 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0811 23:13:51.047445   34971 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0811 23:13:51.238264   34971 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0811 23:13:51.446507   34971 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0811 23:13:52.076967   34971 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0811 23:13:52.891645   34971 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0811 23:13:52.892682   34971 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0811 23:13:52.894859   34971 out.go:204]   - Booting up control plane ...
	I0811 23:13:52.894963   34971 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0811 23:13:52.902342   34971 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0811 23:13:52.904362   34971 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0811 23:13:52.905656   34971 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0811 23:13:52.908560   34971 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0811 23:14:04.911223   34971 kubeadm.go:322] [apiclient] All control plane components are healthy after 12.002558 seconds
	I0811 23:14:04.911336   34971 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0811 23:14:04.925569   34971 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0811 23:14:05.447716   34971 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0811 23:14:05.447868   34971 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-200414 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0811 23:14:05.959220   34971 kubeadm.go:322] [bootstrap-token] Using token: anca9l.gvkfeiznp3k6z58l
	I0811 23:14:05.961308   34971 out.go:204]   - Configuring RBAC rules ...
	I0811 23:14:05.961431   34971 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0811 23:14:05.968836   34971 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0811 23:14:05.980181   34971 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0811 23:14:05.991416   34971 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0811 23:14:06.003834   34971 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0811 23:14:06.007077   34971 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0811 23:14:06.032015   34971 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0811 23:14:06.387927   34971 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0811 23:14:06.511810   34971 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0811 23:14:06.513487   34971 kubeadm.go:322] 
	I0811 23:14:06.513555   34971 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0811 23:14:06.513561   34971 kubeadm.go:322] 
	I0811 23:14:06.513639   34971 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0811 23:14:06.513645   34971 kubeadm.go:322] 
	I0811 23:14:06.513676   34971 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0811 23:14:06.513731   34971 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0811 23:14:06.513779   34971 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0811 23:14:06.513783   34971 kubeadm.go:322] 
	I0811 23:14:06.513831   34971 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0811 23:14:06.513900   34971 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0811 23:14:06.513971   34971 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0811 23:14:06.513984   34971 kubeadm.go:322] 
	I0811 23:14:06.514062   34971 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0811 23:14:06.514133   34971 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0811 23:14:06.514137   34971 kubeadm.go:322] 
	I0811 23:14:06.514214   34971 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token anca9l.gvkfeiznp3k6z58l \
	I0811 23:14:06.514312   34971 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:8884e7cec26767ea186e311f265f5a190c626a6e55b00221424eafcad2c1cce3 \
	I0811 23:14:06.514336   34971 kubeadm.go:322]     --control-plane 
	I0811 23:14:06.514340   34971 kubeadm.go:322] 
	I0811 23:14:06.514418   34971 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0811 23:14:06.514422   34971 kubeadm.go:322] 
	I0811 23:14:06.514498   34971 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token anca9l.gvkfeiznp3k6z58l \
	I0811 23:14:06.514614   34971 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:8884e7cec26767ea186e311f265f5a190c626a6e55b00221424eafcad2c1cce3 
	I0811 23:14:06.517007   34971 kubeadm.go:322] W0811 23:13:47.165741    1232 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0811 23:14:06.517231   34971 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1040-aws\n", err: exit status 1
	I0811 23:14:06.517334   34971 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0811 23:14:06.517454   34971 kubeadm.go:322] W0811 23:13:52.902624    1232 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0811 23:14:06.517570   34971 kubeadm.go:322] W0811 23:13:52.904280    1232 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
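	For reference, the `--discovery-token-ca-cert-hash sha256:8884e7...` value printed in the join commands above is the SHA-256 digest of the cluster CA certificate's DER-encoded SubjectPublicKeyInfo. A small sketch that recomputes it from the ca.crt path seen earlier in this log (assumes the file is readable where the code runs):

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // path from the log
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm's discovery-token-ca-cert-hash is sha256 over the DER-encoded
	// SubjectPublicKeyInfo of the cluster CA certificate.
	fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
}
```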
	I0811 23:14:06.517584   34971 cni.go:84] Creating CNI manager for ""
	I0811 23:14:06.517591   34971 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0811 23:14:06.520438   34971 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0811 23:14:06.522394   34971 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0811 23:14:06.531325   34971 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0811 23:14:06.531348   34971 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0811 23:14:06.564828   34971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0811 23:14:07.028912   34971 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0811 23:14:07.028990   34971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:14:07.029043   34971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.1 minikube.k8s.io/commit=0bff008270ec17d4e0c2c90a14e18ac31a0e01f5 minikube.k8s.io/name=ingress-addon-legacy-200414 minikube.k8s.io/updated_at=2023_08_11T23_14_07_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:14:07.173171   34971 ops.go:34] apiserver oom_adj: -16
	I0811 23:14:07.173273   34971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:14:07.277438   34971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:14:07.875907   34971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:14:08.375545   34971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:14:08.876141   34971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:14:09.376094   34971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:14:09.875782   34971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:14:10.375933   34971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:14:10.875368   34971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:14:11.375973   34971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:14:11.876350   34971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:14:12.376287   34971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:14:12.875930   34971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:14:13.376077   34971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:14:13.875690   34971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:14:14.375533   34971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:14:14.875472   34971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:14:15.376037   34971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:14:15.875846   34971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:14:16.376020   34971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:14:16.875984   34971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:14:17.375420   34971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:14:17.875904   34971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:14:18.376292   34971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:14:18.876045   34971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:14:19.376231   34971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:14:19.876109   34971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:14:20.376049   34971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:14:20.875401   34971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:14:21.375306   34971 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:14:21.476269   34971 kubeadm.go:1081] duration metric: took 14.447345197s to wait for elevateKubeSystemPrivileges.
	I0811 23:14:21.476293   34971 kubeadm.go:406] StartCluster complete in 34.450169132s
	I0811 23:14:21.476308   34971 settings.go:142] acquiring lock: {Name:mkcdb2c6d2ae1cdcfca5cf5a992c9589250c7de5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:14:21.476363   34971 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17044-2333/kubeconfig
	I0811 23:14:21.477045   34971 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17044-2333/kubeconfig: {Name:mk6629381ac7815dbe689239b7a7612d237ee7bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:14:21.477862   34971 kapi.go:59] client config for ingress-addon-legacy-200414: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/client.crt", KeyFile:"/home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/client.key", CAFile:"/home/jenkins/minikube-integration/17044-2333/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16eb290), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0811 23:14:21.479202   34971 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0811 23:14:21.479569   34971 config.go:182] Loaded profile config "ingress-addon-legacy-200414": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0811 23:14:21.479611   34971 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0811 23:14:21.479685   34971 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-200414"
	I0811 23:14:21.479701   34971 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-200414"
	I0811 23:14:21.479733   34971 host.go:66] Checking if "ingress-addon-legacy-200414" exists ...
	I0811 23:14:21.479743   34971 cert_rotation.go:137] Starting client certificate rotation controller
	I0811 23:14:21.479776   34971 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-200414"
	I0811 23:14:21.479789   34971 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-200414"
	I0811 23:14:21.480058   34971 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-200414 --format={{.State.Status}}
	I0811 23:14:21.480144   34971 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-200414 --format={{.State.Status}}
	I0811 23:14:21.511458   34971 kapi.go:59] client config for ingress-addon-legacy-200414: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/client.crt", KeyFile:"/home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/client.key", CAFile:"/home/jenkins/minikube-integration/17044-2333/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16eb290), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0811 23:14:21.534434   34971 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0811 23:14:21.536602   34971 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0811 23:14:21.536621   34971 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0811 23:14:21.536685   34971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-200414
	I0811 23:14:21.535985   34971 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-200414"
	I0811 23:14:21.542053   34971 host.go:66] Checking if "ingress-addon-legacy-200414" exists ...
	I0811 23:14:21.549410   34971 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-200414 --format={{.State.Status}}
	I0811 23:14:21.567054   34971 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/ingress-addon-legacy-200414/id_rsa Username:docker}
	I0811 23:14:21.589804   34971 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0811 23:14:21.589823   34971 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0811 23:14:21.589882   34971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-200414
	I0811 23:14:21.620189   34971 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/ingress-addon-legacy-200414/id_rsa Username:docker}
	I0811 23:14:21.726013   34971 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0811 23:14:21.734963   34971 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-200414" context rescaled to 1 replicas
	I0811 23:14:21.735006   34971 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0811 23:14:21.737146   34971 out.go:177] * Verifying Kubernetes components...
	I0811 23:14:21.739296   34971 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0811 23:14:21.817451   34971 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0811 23:14:21.856840   34971 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0811 23:14:22.132884   34971 start.go:901] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
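	The sed pipeline above splices a `hosts` stanza into the CoreDNS Corefile ahead of its `forward . /etc/resolv.conf` line, which is what the "host record injected" message confirms. The stanza it inserts, shown here as a Go string constant purely for reference:

```go
package main

import "fmt"

// corednsHosts is the fragment the sed expression above inserts into the
// Corefile so that host.minikube.internal resolves to the gateway IP
// (192.168.49.1) from inside pods; fallthrough keeps other lookups working.
const corednsHosts = `        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }`

func main() { fmt.Println(corednsHosts) }
```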
	I0811 23:14:22.133658   34971 kapi.go:59] client config for ingress-addon-legacy-200414: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/client.crt", KeyFile:"/home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/client.key", CAFile:"/home/jenkins/minikube-integration/17044-2333/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16eb290), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0811 23:14:22.133977   34971 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-200414" to be "Ready" ...
	I0811 23:14:22.306578   34971 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0811 23:14:22.308854   34971 addons.go:502] enable addons completed in 829.231894ms: enabled=[default-storageclass storage-provisioner]
	I0811 23:14:24.144830   34971 node_ready.go:58] node "ingress-addon-legacy-200414" has status "Ready":"False"
	I0811 23:14:26.644233   34971 node_ready.go:58] node "ingress-addon-legacy-200414" has status "Ready":"False"
	I0811 23:14:29.144216   34971 node_ready.go:58] node "ingress-addon-legacy-200414" has status "Ready":"False"
	I0811 23:14:30.144436   34971 node_ready.go:49] node "ingress-addon-legacy-200414" has status "Ready":"True"
	I0811 23:14:30.144466   34971 node_ready.go:38] duration metric: took 8.010453489s waiting for node "ingress-addon-legacy-200414" to be "Ready" ...
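	node_ready.go simply polls the node object until its Ready condition reports True, which here took about 8s. A minimal client-go sketch of the same wait (the kubeconfig path is illustrative; the node name and 6m budget come from the log):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s budget in the log
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ingress-addon-legacy-200414", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	panic("timed out waiting for node Ready")
}
```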
	I0811 23:14:30.144477   34971 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0811 23:14:30.151975   34971 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-vvd9v" in "kube-system" namespace to be "Ready" ...
	I0811 23:14:32.160669   34971 pod_ready.go:102] pod "coredns-66bff467f8-vvd9v" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-11 23:14:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0811 23:14:34.164040   34971 pod_ready.go:102] pod "coredns-66bff467f8-vvd9v" in "kube-system" namespace has status "Ready":"False"
	I0811 23:14:36.663475   34971 pod_ready.go:102] pod "coredns-66bff467f8-vvd9v" in "kube-system" namespace has status "Ready":"False"
	I0811 23:14:39.162680   34971 pod_ready.go:102] pod "coredns-66bff467f8-vvd9v" in "kube-system" namespace has status "Ready":"False"
	I0811 23:14:40.663064   34971 pod_ready.go:92] pod "coredns-66bff467f8-vvd9v" in "kube-system" namespace has status "Ready":"True"
	I0811 23:14:40.663090   34971 pod_ready.go:81] duration metric: took 10.511078741s waiting for pod "coredns-66bff467f8-vvd9v" in "kube-system" namespace to be "Ready" ...
	I0811 23:14:40.663101   34971 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-200414" in "kube-system" namespace to be "Ready" ...
	I0811 23:14:40.667229   34971 pod_ready.go:92] pod "etcd-ingress-addon-legacy-200414" in "kube-system" namespace has status "Ready":"True"
	I0811 23:14:40.667253   34971 pod_ready.go:81] duration metric: took 4.144347ms waiting for pod "etcd-ingress-addon-legacy-200414" in "kube-system" namespace to be "Ready" ...
	I0811 23:14:40.667266   34971 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-200414" in "kube-system" namespace to be "Ready" ...
	I0811 23:14:40.671425   34971 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-200414" in "kube-system" namespace has status "Ready":"True"
	I0811 23:14:40.671447   34971 pod_ready.go:81] duration metric: took 4.173754ms waiting for pod "kube-apiserver-ingress-addon-legacy-200414" in "kube-system" namespace to be "Ready" ...
	I0811 23:14:40.671458   34971 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-200414" in "kube-system" namespace to be "Ready" ...
	I0811 23:14:40.676026   34971 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-200414" in "kube-system" namespace has status "Ready":"True"
	I0811 23:14:40.676049   34971 pod_ready.go:81] duration metric: took 4.558464ms waiting for pod "kube-controller-manager-ingress-addon-legacy-200414" in "kube-system" namespace to be "Ready" ...
	I0811 23:14:40.676060   34971 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qnqw2" in "kube-system" namespace to be "Ready" ...
	I0811 23:14:40.680810   34971 pod_ready.go:92] pod "kube-proxy-qnqw2" in "kube-system" namespace has status "Ready":"True"
	I0811 23:14:40.680835   34971 pod_ready.go:81] duration metric: took 4.76769ms waiting for pod "kube-proxy-qnqw2" in "kube-system" namespace to be "Ready" ...
	I0811 23:14:40.680877   34971 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-200414" in "kube-system" namespace to be "Ready" ...
	I0811 23:14:40.858273   34971 request.go:628] Waited for 177.329867ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-200414
	I0811 23:14:41.058298   34971 request.go:628] Waited for 197.350513ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-200414
	I0811 23:14:41.061143   34971 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-200414" in "kube-system" namespace has status "Ready":"True"
	I0811 23:14:41.061167   34971 pod_ready.go:81] duration metric: took 380.274197ms waiting for pod "kube-scheduler-ingress-addon-legacy-200414" in "kube-system" namespace to be "Ready" ...
	I0811 23:14:41.061180   34971 pod_ready.go:38] duration metric: took 10.916685017s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
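	The recurring "Waited for ... due to client-side throttling" lines from request.go are client-go's own rate limiter queueing requests (the defaults are QPS 5 / Burst 10), not API-server priority and fairness. The limits live on rest.Config; a sketch of raising them (kubeconfig path and values illustrative):

```go
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative
	if err != nil {
		panic(err)
	}
	// client-go defaults to QPS 5 / Burst 10; the GET requests above queued
	// behind that limiter. Raising the limits avoids the "Waited for ..."
	// messages at the cost of more API-server load.
	cfg.QPS = 50
	cfg.Burst = 100
	_ = kubernetes.NewForConfigOrDie(cfg)
}
```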
	I0811 23:14:41.061194   34971 api_server.go:52] waiting for apiserver process to appear ...
	I0811 23:14:41.061255   34971 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0811 23:14:41.074140   34971 api_server.go:72] duration metric: took 19.339102691s to wait for apiserver process to appear ...
	I0811 23:14:41.074164   34971 api_server.go:88] waiting for apiserver healthz status ...
	I0811 23:14:41.074180   34971 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0811 23:14:41.083230   34971 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0811 23:14:41.084041   34971 api_server.go:141] control plane version: v1.18.20
	I0811 23:14:41.084061   34971 api_server.go:131] duration metric: took 9.891339ms to wait for apiserver health ...
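	The healthz probe above is a bare GET against the API server's /healthz path, expecting the literal body "ok". An equivalent call through client-go's REST client (sketch; kubeconfig path illustrative):

```go
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// GET /healthz against the API server; "ok" with HTTP 200 means healthy,
	// matching the "returned 200: ok" lines in the log.
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}
```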
	I0811 23:14:41.084070   34971 system_pods.go:43] waiting for kube-system pods to appear ...
	I0811 23:14:41.258462   34971 request.go:628] Waited for 174.329322ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0811 23:14:41.264541   34971 system_pods.go:59] 8 kube-system pods found
	I0811 23:14:41.264579   34971 system_pods.go:61] "coredns-66bff467f8-vvd9v" [878b65e1-a01f-4203-9157-ca60e3b65844] Running
	I0811 23:14:41.264587   34971 system_pods.go:61] "etcd-ingress-addon-legacy-200414" [cd2a42af-65ad-413a-9605-063da9042472] Running
	I0811 23:14:41.264592   34971 system_pods.go:61] "kindnet-bnn2b" [2d7fb4f8-4825-48e8-8455-b33bcecd8b20] Running
	I0811 23:14:41.264597   34971 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-200414" [42dd0478-2ffd-4295-874a-91f36cd8c530] Running
	I0811 23:14:41.264602   34971 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-200414" [ffe261cb-1f23-406a-9acb-87a5655126e1] Running
	I0811 23:14:41.264606   34971 system_pods.go:61] "kube-proxy-qnqw2" [e66a4583-2dc8-46e2-bfb1-dff901e31afb] Running
	I0811 23:14:41.264611   34971 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-200414" [a4573776-de99-48d0-9428-321c39d621f9] Running
	I0811 23:14:41.264616   34971 system_pods.go:61] "storage-provisioner" [334316b0-82d8-4b6c-925f-eac809828359] Running
	I0811 23:14:41.264622   34971 system_pods.go:74] duration metric: took 180.547885ms to wait for pod list to return data ...
	I0811 23:14:41.264631   34971 default_sa.go:34] waiting for default service account to be created ...
	I0811 23:14:41.457960   34971 request.go:628] Waited for 193.240612ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0811 23:14:41.460291   34971 default_sa.go:45] found service account: "default"
	I0811 23:14:41.460315   34971 default_sa.go:55] duration metric: took 195.674316ms for default service account to be created ...
	I0811 23:14:41.460325   34971 system_pods.go:116] waiting for k8s-apps to be running ...
	I0811 23:14:41.658578   34971 request.go:628] Waited for 198.177953ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0811 23:14:41.664419   34971 system_pods.go:86] 8 kube-system pods found
	I0811 23:14:41.664448   34971 system_pods.go:89] "coredns-66bff467f8-vvd9v" [878b65e1-a01f-4203-9157-ca60e3b65844] Running
	I0811 23:14:41.664455   34971 system_pods.go:89] "etcd-ingress-addon-legacy-200414" [cd2a42af-65ad-413a-9605-063da9042472] Running
	I0811 23:14:41.664460   34971 system_pods.go:89] "kindnet-bnn2b" [2d7fb4f8-4825-48e8-8455-b33bcecd8b20] Running
	I0811 23:14:41.664466   34971 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-200414" [42dd0478-2ffd-4295-874a-91f36cd8c530] Running
	I0811 23:14:41.664539   34971 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-200414" [ffe261cb-1f23-406a-9acb-87a5655126e1] Running
	I0811 23:14:41.664552   34971 system_pods.go:89] "kube-proxy-qnqw2" [e66a4583-2dc8-46e2-bfb1-dff901e31afb] Running
	I0811 23:14:41.664557   34971 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-200414" [a4573776-de99-48d0-9428-321c39d621f9] Running
	I0811 23:14:41.664562   34971 system_pods.go:89] "storage-provisioner" [334316b0-82d8-4b6c-925f-eac809828359] Running
	I0811 23:14:41.664568   34971 system_pods.go:126] duration metric: took 204.238984ms to wait for k8s-apps to be running ...
	I0811 23:14:41.664585   34971 system_svc.go:44] waiting for kubelet service to be running ....
	I0811 23:14:41.664661   34971 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0811 23:14:41.678011   34971 system_svc.go:56] duration metric: took 13.420334ms WaitForService to wait for kubelet.
	I0811 23:14:41.678036   34971 kubeadm.go:581] duration metric: took 19.943004223s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0811 23:14:41.678062   34971 node_conditions.go:102] verifying NodePressure condition ...
	I0811 23:14:41.858512   34971 request.go:628] Waited for 180.385002ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0811 23:14:41.861420   34971 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0811 23:14:41.861451   34971 node_conditions.go:123] node cpu capacity is 2
	I0811 23:14:41.861463   34971 node_conditions.go:105] duration metric: took 183.395904ms to run NodePressure ...
	I0811 23:14:41.861501   34971 start.go:228] waiting for startup goroutines ...
	I0811 23:14:41.861516   34971 start.go:233] waiting for cluster config update ...
	I0811 23:14:41.861527   34971 start.go:242] writing updated cluster config ...
	I0811 23:14:41.861836   34971 ssh_runner.go:195] Run: rm -f paused
	I0811 23:14:41.923288   34971 start.go:599] kubectl: 1.27.4, cluster: 1.18.20 (minor skew: 9)
	I0811 23:14:41.925417   34971 out.go:177] 
	W0811 23:14:41.927395   34971 out.go:239] ! /usr/local/bin/kubectl is version 1.27.4, which may have incompatibilities with Kubernetes 1.18.20.
	I0811 23:14:41.929308   34971 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0811 23:14:41.931282   34971 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-200414" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Aug 11 23:17:51 ingress-addon-legacy-200414 crio[899]: time="2023-08-11 23:17:51.314337882Z" level=info msg="Stopped container 744d495a298b6589a820fe6e79c494bbf5803e2d1281bccfa67672088ab54a20: ingress-nginx/ingress-nginx-controller-7fcf777cb7-vvvpb/controller" id=e6e57674-9176-4617-84b5-0a5006f76f39 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Aug 11 23:17:51 ingress-addon-legacy-200414 crio[899]: time="2023-08-11 23:17:51.315033883Z" level=info msg="Stopping pod sandbox: 1f7ebf1e8f06a9abd95aa3ec7e79c8c1e9526f25b36238a6aa7abb4731491ea6" id=f2557449-847d-4607-8421-6018d35e45a6 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 11 23:17:51 ingress-addon-legacy-200414 crio[899]: time="2023-08-11 23:17:51.315934112Z" level=info msg="Stopped container 744d495a298b6589a820fe6e79c494bbf5803e2d1281bccfa67672088ab54a20: ingress-nginx/ingress-nginx-controller-7fcf777cb7-vvvpb/controller" id=0a496405-8441-4b9f-9097-3480fc8cb134 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Aug 11 23:17:51 ingress-addon-legacy-200414 crio[899]: time="2023-08-11 23:17:51.316300123Z" level=info msg="Stopping pod sandbox: 1f7ebf1e8f06a9abd95aa3ec7e79c8c1e9526f25b36238a6aa7abb4731491ea6" id=ddd3552d-27ba-4fa0-b959-b23ee0823697 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 11 23:17:51 ingress-addon-legacy-200414 crio[899]: time="2023-08-11 23:17:51.318508536Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-U3VWPXVY6BH3KWE6 - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-NHSVW6OK7DNVTGWW - [0:0]\n-X KUBE-HP-NHSVW6OK7DNVTGWW\n-X KUBE-HP-U3VWPXVY6BH3KWE6\nCOMMIT\n"
	Aug 11 23:17:51 ingress-addon-legacy-200414 crio[899]: time="2023-08-11 23:17:51.320061015Z" level=info msg="Closing host port tcp:80"
	Aug 11 23:17:51 ingress-addon-legacy-200414 crio[899]: time="2023-08-11 23:17:51.320110681Z" level=info msg="Closing host port tcp:443"
	Aug 11 23:17:51 ingress-addon-legacy-200414 crio[899]: time="2023-08-11 23:17:51.321294106Z" level=info msg="Host port tcp:80 does not have an open socket"
	Aug 11 23:17:51 ingress-addon-legacy-200414 crio[899]: time="2023-08-11 23:17:51.321322118Z" level=info msg="Host port tcp:443 does not have an open socket"
	Aug 11 23:17:51 ingress-addon-legacy-200414 crio[899]: time="2023-08-11 23:17:51.321466760Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-vvvpb Namespace:ingress-nginx ID:1f7ebf1e8f06a9abd95aa3ec7e79c8c1e9526f25b36238a6aa7abb4731491ea6 UID:4da12362-2965-4658-a458-fc368c6b717e NetNS:/var/run/netns/777cea41-594f-448d-942c-34b00af8ef00 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 11 23:17:51 ingress-addon-legacy-200414 crio[899]: time="2023-08-11 23:17:51.321610811Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-vvvpb from CNI network \"kindnet\" (type=ptp)"
	Aug 11 23:17:51 ingress-addon-legacy-200414 crio[899]: time="2023-08-11 23:17:51.354655372Z" level=info msg="Stopped pod sandbox: 1f7ebf1e8f06a9abd95aa3ec7e79c8c1e9526f25b36238a6aa7abb4731491ea6" id=f2557449-847d-4607-8421-6018d35e45a6 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 11 23:17:51 ingress-addon-legacy-200414 crio[899]: time="2023-08-11 23:17:51.355876787Z" level=info msg="Stopped pod sandbox (already stopped): 1f7ebf1e8f06a9abd95aa3ec7e79c8c1e9526f25b36238a6aa7abb4731491ea6" id=ddd3552d-27ba-4fa0-b959-b23ee0823697 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 11 23:17:51 ingress-addon-legacy-200414 crio[899]: time="2023-08-11 23:17:51.741466973Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=35619688-97f9-421b-9130-ed32307dcadf name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 11 23:17:51 ingress-addon-legacy-200414 crio[899]: time="2023-08-11 23:17:51.741667993Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:13753a81eccfdd153bf7fc9a4c9198edbcce0110e7f46ed0d38cc654a6458ff5,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea],Size_:28496999,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=35619688-97f9-421b-9130-ed32307dcadf name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 11 23:17:51 ingress-addon-legacy-200414 crio[899]: time="2023-08-11 23:17:51.742745792Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=eb5c63e3-adff-4700-8cf0-6e4278d03575 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 11 23:17:51 ingress-addon-legacy-200414 crio[899]: time="2023-08-11 23:17:51.742933265Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:13753a81eccfdd153bf7fc9a4c9198edbcce0110e7f46ed0d38cc654a6458ff5,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea],Size_:28496999,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=eb5c63e3-adff-4700-8cf0-6e4278d03575 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 11 23:17:51 ingress-addon-legacy-200414 crio[899]: time="2023-08-11 23:17:51.744472098Z" level=info msg="Creating container: default/hello-world-app-5f5d8b66bb-hw8nq/hello-world-app" id=f20204e2-6ceb-4781-9b2e-61344b0895df name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 11 23:17:51 ingress-addon-legacy-200414 crio[899]: time="2023-08-11 23:17:51.744562733Z" level=warning msg="Allowed annotations are specified for workload []"
	Aug 11 23:17:51 ingress-addon-legacy-200414 crio[899]: time="2023-08-11 23:17:51.836241742Z" level=info msg="Created container 17187ddfa02ba4ecf16252a6a367223a5d3b9e9a03eaa4a4cc73deb7c93417b5: default/hello-world-app-5f5d8b66bb-hw8nq/hello-world-app" id=f20204e2-6ceb-4781-9b2e-61344b0895df name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 11 23:17:51 ingress-addon-legacy-200414 crio[899]: time="2023-08-11 23:17:51.837285505Z" level=info msg="Starting container: 17187ddfa02ba4ecf16252a6a367223a5d3b9e9a03eaa4a4cc73deb7c93417b5" id=5c9066ef-1e54-43e6-9730-74c862ad3ace name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 11 23:17:51 ingress-addon-legacy-200414 conmon[3653]: conmon 17187ddfa02ba4ecf162 <ninfo>: container 3664 exited with status 1
	Aug 11 23:17:51 ingress-addon-legacy-200414 crio[899]: time="2023-08-11 23:17:51.854095416Z" level=info msg="Started container" PID=3664 containerID=17187ddfa02ba4ecf16252a6a367223a5d3b9e9a03eaa4a4cc73deb7c93417b5 description=default/hello-world-app-5f5d8b66bb-hw8nq/hello-world-app id=5c9066ef-1e54-43e6-9730-74c862ad3ace name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=23a0d78e52c851bbf22fdc5ebdea57bc91f7a2d072d7c46162800ffc806c54ad
	Aug 11 23:17:52 ingress-addon-legacy-200414 crio[899]: time="2023-08-11 23:17:52.169368809Z" level=info msg="Removing container: fc560e9955269c6ba3affd4d5bbb3c70b4a23d13db77981c78f64b252e07ce77" id=c26eaf31-8ea9-4ac2-8f5b-22abdf724481 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 11 23:17:52 ingress-addon-legacy-200414 crio[899]: time="2023-08-11 23:17:52.199456811Z" level=info msg="Removed container fc560e9955269c6ba3affd4d5bbb3c70b4a23d13db77981c78f64b252e07ce77: default/hello-world-app-5f5d8b66bb-hw8nq/hello-world-app" id=c26eaf31-8ea9-4ac2-8f5b-22abdf724481 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
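
The teardown above shows CRI-O's hostport cleanup: the per-pod KUBE-HP-* NAT chains are deleted, host ports tcp:80 and tcp:443 are released, and the sandbox is detached from the kindnet CNI network. A sketch for inspecting those chains while the controller is still running (run on the node, e.g. via `minikube ssh -p ingress-addon-legacy-200414`):

	# List the hostport dispatch chain and the per-pod KUBE-HP-* chains it points at.
	sudo iptables -t nat -L KUBE-HOSTPORTS -n --line-numbers
	sudo iptables -t nat -S | grep KUBE-HP-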
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	17187ddfa02ba       13753a81eccfdd153bf7fc9a4c9198edbcce0110e7f46ed0d38cc654a6458ff5                                                   5 seconds ago       Exited              hello-world-app           2                   23a0d78e52c85       hello-world-app-5f5d8b66bb-hw8nq
	724906cac58a4       docker.io/library/nginx@sha256:647c5c83418c19eef0cddc647b9899326e3081576390c4c7baa4fce545123b6c                    2 minutes ago       Running             nginx                     0                   1f91d155aa155       nginx
	744d495a298b6       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   3 minutes ago       Exited              controller                0                   1f7ebf1e8f06a       ingress-nginx-controller-7fcf777cb7-vvvpb
	70a6ff17748de       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              patch                     0                   2e8a3da8a020c       ingress-nginx-admission-patch-nm4fb
	8859cc170b2ac       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              create                    0                   48061e9eba031       ingress-nginx-admission-create-jxg84
	7f56e9c20f05f       gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2    3 minutes ago       Running             storage-provisioner       0                   a92b5a7c07533       storage-provisioner
	eeb961fa75d94       6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184                                                   3 minutes ago       Running             coredns                   0                   85bde2d8471ac       coredns-66bff467f8-vvd9v
	3baab6bc8cf2c       docker.io/kindest/kindnetd@sha256:2c39858b71cf6c5737ff0daa8130a6574d4c6bd2a7dacaf002060c02f2bc1b4f                 3 minutes ago       Running             kindnet-cni               0                   c7fd85db0b5d6       kindnet-bnn2b
	c3de002a3ec56       565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8                                                   3 minutes ago       Running             kube-proxy                0                   fc5f96a95a85f       kube-proxy-qnqw2
	a67d2b0a2c810       68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1                                                   4 minutes ago       Running             kube-controller-manager   0                   e74b8385f65bc       kube-controller-manager-ingress-addon-legacy-200414
	06105ebd9e3fa       095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1                                                   4 minutes ago       Running             kube-scheduler            0                   9134755de1003       kube-scheduler-ingress-addon-legacy-200414
	bb4d8bbb91a52       ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952                                                   4 minutes ago       Running             etcd                      0                   c6dbdbc79e85c       etcd-ingress-addon-legacy-200414
	37228f154c0c2       2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0                                                   4 minutes ago       Running             kube-apiserver            0                   f99b77ff702b6       kube-apiserver-ingress-addon-legacy-200414
	
	* 
	* ==> coredns [eeb961fa75d9401461332b09c3110919e3e788ab2480791e6e1edbc524be8cf9] <==
	* [INFO] 10.244.0.5:60539 - 34264 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000051956s
	[INFO] 10.244.0.5:44673 - 64120 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.006447253s
	[INFO] 10.244.0.5:60539 - 42666 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.006112332s
	[INFO] 10.244.0.5:60539 - 24217 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002928571s
	[INFO] 10.244.0.5:44673 - 10317 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.003529472s
	[INFO] 10.244.0.5:44673 - 4236 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000118122s
	[INFO] 10.244.0.5:60539 - 11339 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000077146s
	[INFO] 10.244.0.5:57249 - 31697 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000093408s
	[INFO] 10.244.0.5:57249 - 5212 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000035085s
	[INFO] 10.244.0.5:40084 - 7888 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000025477s
	[INFO] 10.244.0.5:40084 - 58054 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000029547s
	[INFO] 10.244.0.5:57249 - 48476 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000024402s
	[INFO] 10.244.0.5:57249 - 19001 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000086352s
	[INFO] 10.244.0.5:57249 - 805 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000048993s
	[INFO] 10.244.0.5:40084 - 23913 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000035438s
	[INFO] 10.244.0.5:57249 - 7277 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000033568s
	[INFO] 10.244.0.5:57249 - 40686 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001181906s
	[INFO] 10.244.0.5:40084 - 18173 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000031926s
	[INFO] 10.244.0.5:40084 - 2021 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000189368s
	[INFO] 10.244.0.5:40084 - 2879 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00006455s
	[INFO] 10.244.0.5:57249 - 12286 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001057358s
	[INFO] 10.244.0.5:57249 - 54045 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000035217s
	[INFO] 10.244.0.5:40084 - 10775 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000951077s
	[INFO] 10.244.0.5:40084 - 44026 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000850628s
	[INFO] 10.244.0.5:40084 - 62385 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000056871s
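
The NXDOMAIN/NOERROR pairs above are normal resolver behavior rather than a CoreDNS fault: with ndots:5, a name such as hello-world-app.default.svc.cluster.local is first tried with every entry of the pod's search list appended, and only the final absolute query answers NOERROR. A sketch for confirming the search list from inside a pod (the nameserver value 10.96.0.10 is the conventional kube-dns ClusterIP and an assumption here):

	kubectl --context ingress-addon-legacy-200414 exec deploy/hello-world-app -- cat /etc/resolv.conf
	# expected shape:
	#   nameserver 10.96.0.10
	#   search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	#   options ndots:5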
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-200414
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-200414
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0bff008270ec17d4e0c2c90a14e18ac31a0e01f5
	                    minikube.k8s.io/name=ingress-addon-legacy-200414
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_11T23_14_07_0700
	                    minikube.k8s.io/version=v1.31.1
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 11 Aug 2023 23:14:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-200414
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 11 Aug 2023 23:17:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 11 Aug 2023 23:17:39 +0000   Fri, 11 Aug 2023 23:13:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 11 Aug 2023 23:17:39 +0000   Fri, 11 Aug 2023 23:13:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 11 Aug 2023 23:17:39 +0000   Fri, 11 Aug 2023 23:13:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 11 Aug 2023 23:17:39 +0000   Fri, 11 Aug 2023 23:14:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-200414
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022628Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022628Ki
	  pods:               110
	System Info:
	  Machine ID:                 4aa9a1327ef145cfa517955ffcc3f2d0
	  System UUID:                9b4b12b5-79ba-4f09-8fb8-49b2c6d263f8
	  Boot ID:                    9640b2fc-8f02-48dc-9a98-7457f33cfb40
	  Kernel Version:             5.15.0-1040-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-hw8nq                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  kube-system                 coredns-66bff467f8-vvd9v                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m36s
	  kube-system                 etcd-ingress-addon-legacy-200414                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 kindnet-bnn2b                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3m36s
	  kube-system                 kube-apiserver-ingress-addon-legacy-200414             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-200414    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 kube-proxy-qnqw2                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 kube-scheduler-ingress-addon-legacy-200414             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             120Mi (1%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  Starting                 4m2s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m2s (x3 over 4m2s)  kubelet     Node ingress-addon-legacy-200414 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m2s (x4 over 4m2s)  kubelet     Node ingress-addon-legacy-200414 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m2s (x3 over 4m2s)  kubelet     Node ingress-addon-legacy-200414 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m48s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m48s                kubelet     Node ingress-addon-legacy-200414 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m48s                kubelet     Node ingress-addon-legacy-200414 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m48s                kubelet     Node ingress-addon-legacy-200414 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m34s                kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m28s                kubelet     Node ingress-addon-legacy-200414 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.000754] FS-Cache: N-cookie c=0000000c [p=00000003 fl=2 nc=0 na=1]
	[  +0.000944] FS-Cache: N-cookie d=0000000087cf7eaf{9p.inode} n=00000000d7da585e
	[  +0.001054] FS-Cache: N-key=[8] '805b3b0000000000'
	[  +0.003010] FS-Cache: Duplicate cookie detected
	[  +0.000685] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000959] FS-Cache: O-cookie d=0000000087cf7eaf{9p.inode} n=0000000004a8382c
	[  +0.001063] FS-Cache: O-key=[8] '805b3b0000000000'
	[  +0.000759] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.000963] FS-Cache: N-cookie d=0000000087cf7eaf{9p.inode} n=0000000045141c8c
	[  +0.001051] FS-Cache: N-key=[8] '805b3b0000000000'
	[  +2.763262] FS-Cache: Duplicate cookie detected
	[  +0.000715] FS-Cache: O-cookie c=00000004 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000977] FS-Cache: O-cookie d=0000000087cf7eaf{9p.inode} n=000000003c07f4d4
	[  +0.001127] FS-Cache: O-key=[8] '7f5b3b0000000000'
	[  +0.000727] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.000956] FS-Cache: N-cookie d=0000000087cf7eaf{9p.inode} n=000000006a6921aa
	[  +0.001095] FS-Cache: N-key=[8] '7f5b3b0000000000'
	[  +0.384460] FS-Cache: Duplicate cookie detected
	[  +0.000735] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000980] FS-Cache: O-cookie d=0000000087cf7eaf{9p.inode} n=0000000084bb64d5
	[  +0.001049] FS-Cache: O-key=[8] '8a5b3b0000000000'
	[  +0.000719] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.000976] FS-Cache: N-cookie d=0000000087cf7eaf{9p.inode} n=00000000d7da585e
	[  +0.001049] FS-Cache: N-key=[8] '8a5b3b0000000000'
	[Aug11 23:13] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	* 
	* ==> etcd [bb4d8bbb91a52643d54f5e052dbfcd7c762b9110fa48ab76deb5ccf6fd9cd907] <==
	* raft2023/08/11 23:13:58 INFO: aec36adc501070cc became follower at term 0
	raft2023/08/11 23:13:58 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/08/11 23:13:58 INFO: aec36adc501070cc became follower at term 1
	raft2023/08/11 23:13:58 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-08-11 23:13:58.504530 W | auth: simple token is not cryptographically signed
	2023-08-11 23:13:58.507370 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-08-11 23:13:58.511859 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-08-11 23:13:58.512127 I | embed: listening for peers on 192.168.49.2:2380
	2023-08-11 23:13:58.512308 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-08-11 23:13:58.512606 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/08/11 23:13:58 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-08-11 23:13:58.512907 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	raft2023/08/11 23:13:59 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/08/11 23:13:59 INFO: aec36adc501070cc became candidate at term 2
	raft2023/08/11 23:13:59 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/08/11 23:13:59 INFO: aec36adc501070cc became leader at term 2
	raft2023/08/11 23:13:59 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-08-11 23:13:59.281851 I | etcdserver: setting up the initial cluster version to 3.4
	2023-08-11 23:13:59.282080 I | etcdserver: published {Name:ingress-addon-legacy-200414 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-08-11 23:13:59.282142 I | embed: ready to serve client requests
	2023-08-11 23:13:59.283795 I | embed: serving client requests on 127.0.0.1:2379
	2023-08-11 23:13:59.283964 I | embed: ready to serve client requests
	2023-08-11 23:13:59.285310 I | embed: serving client requests on 192.168.49.2:2379
	2023-08-11 23:13:59.302062 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-08-11 23:13:59.302190 I | etcdserver/api: enabled capabilities for version 3.4
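
The raft lines above are a healthy single-member bootstrap: the member starts as a follower at term 0, bumps to term 1 during init, elects itself at term 2, and only then serves client requests. The member can be queried directly with the certificate paths printed in the ClientTLS line; a sketch (run inside the node):

	sudo ETCDCTL_API=3 etcdctl \
	  --endpoints=https://127.0.0.1:2379 \
	  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	  --cert=/var/lib/minikube/certs/etcd/server.crt \
	  --key=/var/lib/minikube/certs/etcd/server.key \
	  endpoint status --write-out=table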
	
	* 
	* ==> kernel <==
	*  23:17:57 up  1:00,  0 users,  load average: 0.41, 1.14, 0.92
	Linux ingress-addon-legacy-200414 5.15.0-1040-aws #45~20.04.1-Ubuntu SMP Tue Jul 11 19:11:12 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [3baab6bc8cf2c206fe331dc02f37038e80340b324b46ca971a5331bc10c92685] <==
	* I0811 23:15:56.105485       1 main.go:227] handling current node
	I0811 23:16:06.116563       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0811 23:16:06.116596       1 main.go:227] handling current node
	I0811 23:16:16.127593       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0811 23:16:16.127621       1 main.go:227] handling current node
	I0811 23:16:26.131190       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0811 23:16:26.131224       1 main.go:227] handling current node
	I0811 23:16:36.141906       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0811 23:16:36.141934       1 main.go:227] handling current node
	I0811 23:16:46.152179       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0811 23:16:46.152202       1 main.go:227] handling current node
	I0811 23:16:56.157775       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0811 23:16:56.157803       1 main.go:227] handling current node
	I0811 23:17:06.165009       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0811 23:17:06.165038       1 main.go:227] handling current node
	I0811 23:17:16.168745       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0811 23:17:16.168776       1 main.go:227] handling current node
	I0811 23:17:26.172001       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0811 23:17:26.172031       1 main.go:227] handling current node
	I0811 23:17:36.182737       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0811 23:17:36.182768       1 main.go:227] handling current node
	I0811 23:17:46.185835       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0811 23:17:46.185866       1 main.go:227] handling current node
	I0811 23:17:56.196373       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0811 23:17:56.196401       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [37228f154c0c2b19662cd270efb745b89abc62d5d3850fa58795ed337c50ae24] <==
	* E0811 23:14:03.530859       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0811 23:14:03.629321       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0811 23:14:03.647716       1 cache.go:39] Caches are synced for autoregister controller
	I0811 23:14:03.648297       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0811 23:14:03.648377       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0811 23:14:03.648419       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0811 23:14:04.437878       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0811 23:14:04.438028       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0811 23:14:04.447701       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0811 23:14:04.453878       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0811 23:14:04.453902       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0811 23:14:04.825382       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0811 23:14:04.870674       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0811 23:14:04.966508       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0811 23:14:04.967501       1 controller.go:609] quota admission added evaluator for: endpoints
	I0811 23:14:04.971499       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0811 23:14:05.858194       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0811 23:14:06.308864       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0811 23:14:06.430910       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0811 23:14:09.681391       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0811 23:14:21.618829       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0811 23:14:21.827038       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0811 23:14:42.781941       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0811 23:15:10.621845       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E0811 23:17:49.130143       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
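
The single error in this block is informative: the invalidated bearer token at 23:17:49 coincides with the ingress-nginx namespace being deleted, so the controller's service-account token was revoked while its pod was still shutting down. A sketch for correlating it with the namespace's events:

	kubectl --context ingress-addon-legacy-200414 get events -n ingress-nginx --sort-by=.lastTimestamp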
	
	* 
	* ==> kube-controller-manager [a67d2b0a2c81087ef5178a7c79e2f3c23692da1616b6c7afe75432aae7a9e1c4] <==
	* reemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4000fd6ea0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001ad2378)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0811 23:14:21.806117       1 shared_informer.go:230] Caches are synced for resource quota 
	I0811 23:14:21.843401       1 shared_informer.go:230] Caches are synced for persistent volume 
	I0811 23:14:21.845739       1 shared_informer.go:230] Caches are synced for expand 
	I0811 23:14:21.861752       1 shared_informer.go:230] Caches are synced for PV protection 
	I0811 23:14:21.865576       1 shared_informer.go:230] Caches are synced for attach detach 
	I0811 23:14:21.866270       1 shared_informer.go:230] Caches are synced for disruption 
	I0811 23:14:21.866286       1 disruption.go:339] Sending events to api server.
	I0811 23:14:21.866342       1 shared_informer.go:230] Caches are synced for resource quota 
	I0811 23:14:21.913325       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0811 23:14:21.913353       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0811 23:14:21.942072       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"dce4a85a-02e6-4b3a-b0ec-62fe653d5e23", APIVersion:"apps/v1", ResourceVersion:"350", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 1
	I0811 23:14:22.002110       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"9e259dd0-cf7b-4363-ac0d-253e82ada480", APIVersion:"apps/v1", ResourceVersion:"358", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-vvd9v
	I0811 23:14:22.197897       1 shared_informer.go:223] Waiting for caches to sync for garbage collector
	I0811 23:14:22.197954       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0811 23:14:31.523858       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0811 23:14:42.758634       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"432b20ef-ed4a-41f0-9ccc-41bd07b95763", APIVersion:"apps/v1", ResourceVersion:"470", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0811 23:14:42.776112       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"3feddf22-a978-4e9b-b38b-6b6d201f5801", APIVersion:"apps/v1", ResourceVersion:"471", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-vvvpb
	I0811 23:14:42.814496       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"b173c799-56d3-41ae-9087-df91b935d5cf", APIVersion:"batch/v1", ResourceVersion:"475", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-jxg84
	I0811 23:14:42.870821       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"22b524cb-7941-48bd-b365-8d23a74180b0", APIVersion:"batch/v1", ResourceVersion:"488", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-nm4fb
	I0811 23:14:45.821433       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"b173c799-56d3-41ae-9087-df91b935d5cf", APIVersion:"batch/v1", ResourceVersion:"489", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0811 23:14:45.841275       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"22b524cb-7941-48bd-b365-8d23a74180b0", APIVersion:"batch/v1", ResourceVersion:"495", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0811 23:17:31.651521       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"f4d6fd8e-af86-47e7-ac62-d1f70799bcda", APIVersion:"apps/v1", ResourceVersion:"712", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0811 23:17:31.662060       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"dfc269f7-3e13-47eb-ac08-124fda5f4716", APIVersion:"apps/v1", ResourceVersion:"713", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-hw8nq
	E0811 23:17:53.867986       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-vh6gk" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [c3de002a3ec569e62eb6341df79380877192307645de19a04ba87c3f63d9744d] <==
	* W0811 23:14:23.786407       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0811 23:14:23.805264       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0811 23:14:23.805378       1 server_others.go:186] Using iptables Proxier.
	I0811 23:14:23.805731       1 server.go:583] Version: v1.18.20
	I0811 23:14:23.806772       1 config.go:133] Starting endpoints config controller
	I0811 23:14:23.806817       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0811 23:14:23.806985       1 config.go:315] Starting service config controller
	I0811 23:14:23.806997       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0811 23:14:23.907389       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0811 23:14:23.907506       1 shared_informer.go:230] Caches are synced for service config 
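
With the iptables proxier selected above, each Service is implemented as kernel NAT rules rather than a userspace proxy, so once the endpoint and service caches sync there is little further for kube-proxy to log. A sketch for seeing the generated rules on the node:

	# KUBE-SERVICES is the entry chain kube-proxy maintains for ClusterIPs.
	sudo iptables -t nat -L KUBE-SERVICES -n | head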
	
	* 
	* ==> kube-scheduler [06105ebd9e3fa12161388034d122b04bcf98be273e99f37391f22fa2b7edc7bf] <==
	* W0811 23:14:03.573541       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0811 23:14:03.573552       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0811 23:14:03.636123       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0811 23:14:03.636226       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0811 23:14:03.638219       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0811 23:14:03.638442       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0811 23:14:03.638486       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0811 23:14:03.638533       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0811 23:14:03.649469       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0811 23:14:03.649653       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0811 23:14:03.649786       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0811 23:14:03.649910       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0811 23:14:03.650014       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0811 23:14:03.650140       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0811 23:14:03.650272       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0811 23:14:03.650383       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0811 23:14:03.650477       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0811 23:14:03.650583       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0811 23:14:03.650686       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0811 23:14:03.653256       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0811 23:14:04.471829       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0811 23:14:04.662620       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0811 23:14:04.694378       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0811 23:14:04.697574       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0811 23:14:07.638634       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
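
The `forbidden` errors above are a routine bootstrap race: the scheduler starts listing resources before the apiserver has finished installing the default RBAC bindings, and they stop once the informer caches sync at 23:14:07. Were they to persist, the binding could be probed directly; a sketch:

	kubectl --context ingress-addon-legacy-200414 auth can-i list nodes --as=system:kube-scheduler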
	
	* 
	* ==> kubelet <==
	* Aug 11 23:17:36 ingress-addon-legacy-200414 kubelet[1610]: I0811 23:17:36.141988    1610 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: b36831d1a99af0984c89c660bfd51004e3dcbc3cc377b9ea58ec519467d28388
	Aug 11 23:17:36 ingress-addon-legacy-200414 kubelet[1610]: I0811 23:17:36.142256    1610 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: fc560e9955269c6ba3affd4d5bbb3c70b4a23d13db77981c78f64b252e07ce77
	Aug 11 23:17:36 ingress-addon-legacy-200414 kubelet[1610]: E0811 23:17:36.142531    1610 pod_workers.go:191] Error syncing pod cea1e5be-bf08-49d5-b8bc-b43da49e9a22 ("hello-world-app-5f5d8b66bb-hw8nq_default(cea1e5be-bf08-49d5-b8bc-b43da49e9a22)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-hw8nq_default(cea1e5be-bf08-49d5-b8bc-b43da49e9a22)"
	Aug 11 23:17:37 ingress-addon-legacy-200414 kubelet[1610]: I0811 23:17:37.144871    1610 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: fc560e9955269c6ba3affd4d5bbb3c70b4a23d13db77981c78f64b252e07ce77
	Aug 11 23:17:37 ingress-addon-legacy-200414 kubelet[1610]: E0811 23:17:37.145153    1610 pod_workers.go:191] Error syncing pod cea1e5be-bf08-49d5-b8bc-b43da49e9a22 ("hello-world-app-5f5d8b66bb-hw8nq_default(cea1e5be-bf08-49d5-b8bc-b43da49e9a22)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-hw8nq_default(cea1e5be-bf08-49d5-b8bc-b43da49e9a22)"
	Aug 11 23:17:39 ingress-addon-legacy-200414 kubelet[1610]: E0811 23:17:39.742026    1610 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Aug 11 23:17:39 ingress-addon-legacy-200414 kubelet[1610]: E0811 23:17:39.742059    1610 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Aug 11 23:17:39 ingress-addon-legacy-200414 kubelet[1610]: E0811 23:17:39.742100    1610 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Aug 11 23:17:39 ingress-addon-legacy-200414 kubelet[1610]: E0811 23:17:39.742133    1610 pod_workers.go:191] Error syncing pod acc2aab5-016f-45ad-9295-cdcc428cb89f ("kube-ingress-dns-minikube_kube-system(acc2aab5-016f-45ad-9295-cdcc428cb89f)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Aug 11 23:17:47 ingress-addon-legacy-200414 kubelet[1610]: I0811 23:17:47.613842    1610 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-lf8gn" (UniqueName: "kubernetes.io/secret/acc2aab5-016f-45ad-9295-cdcc428cb89f-minikube-ingress-dns-token-lf8gn") pod "acc2aab5-016f-45ad-9295-cdcc428cb89f" (UID: "acc2aab5-016f-45ad-9295-cdcc428cb89f")
	Aug 11 23:17:47 ingress-addon-legacy-200414 kubelet[1610]: I0811 23:17:47.618263    1610 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acc2aab5-016f-45ad-9295-cdcc428cb89f-minikube-ingress-dns-token-lf8gn" (OuterVolumeSpecName: "minikube-ingress-dns-token-lf8gn") pod "acc2aab5-016f-45ad-9295-cdcc428cb89f" (UID: "acc2aab5-016f-45ad-9295-cdcc428cb89f"). InnerVolumeSpecName "minikube-ingress-dns-token-lf8gn". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 11 23:17:47 ingress-addon-legacy-200414 kubelet[1610]: I0811 23:17:47.714229    1610 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-lf8gn" (UniqueName: "kubernetes.io/secret/acc2aab5-016f-45ad-9295-cdcc428cb89f-minikube-ingress-dns-token-lf8gn") on node "ingress-addon-legacy-200414" DevicePath ""
	Aug 11 23:17:49 ingress-addon-legacy-200414 kubelet[1610]: E0811 23:17:49.114838    1610 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-vvvpb.177a776122f281d1", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-vvvpb", UID:"4da12362-2965-4658-a458-fc368c6b717e", APIVersion:"v1", ResourceVersion:"476", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-200414"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc12dce4746ba9fd1, ext:222900283115, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc12dce4746ba9fd1, ext:222900283115, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-vvvpb.177a776122f281d1" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 11 23:17:49 ingress-addon-legacy-200414 kubelet[1610]: E0811 23:17:49.130171    1610 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-vvvpb.177a776122f281d1", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-vvvpb", UID:"4da12362-2965-4658-a458-fc368c6b717e", APIVersion:"v1", ResourceVersion:"476", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-200414"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc12dce4746ba9fd1, ext:222900283115, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc12dce47473a8fe5, ext:222908667648, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-vvvpb.177a776122f281d1" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 11 23:17:51 ingress-addon-legacy-200414 kubelet[1610]: I0811 23:17:51.623592    1610 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-p7fmv" (UniqueName: "kubernetes.io/secret/4da12362-2965-4658-a458-fc368c6b717e-ingress-nginx-token-p7fmv") pod "4da12362-2965-4658-a458-fc368c6b717e" (UID: "4da12362-2965-4658-a458-fc368c6b717e")
	Aug 11 23:17:51 ingress-addon-legacy-200414 kubelet[1610]: I0811 23:17:51.623647    1610 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/4da12362-2965-4658-a458-fc368c6b717e-webhook-cert") pod "4da12362-2965-4658-a458-fc368c6b717e" (UID: "4da12362-2965-4658-a458-fc368c6b717e")
	Aug 11 23:17:51 ingress-addon-legacy-200414 kubelet[1610]: I0811 23:17:51.629848    1610 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4da12362-2965-4658-a458-fc368c6b717e-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "4da12362-2965-4658-a458-fc368c6b717e" (UID: "4da12362-2965-4658-a458-fc368c6b717e"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 11 23:17:51 ingress-addon-legacy-200414 kubelet[1610]: I0811 23:17:51.631410    1610 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4da12362-2965-4658-a458-fc368c6b717e-ingress-nginx-token-p7fmv" (OuterVolumeSpecName: "ingress-nginx-token-p7fmv") pod "4da12362-2965-4658-a458-fc368c6b717e" (UID: "4da12362-2965-4658-a458-fc368c6b717e"). InnerVolumeSpecName "ingress-nginx-token-p7fmv". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 11 23:17:51 ingress-addon-legacy-200414 kubelet[1610]: I0811 23:17:51.723971    1610 reconciler.go:319] Volume detached for volume "ingress-nginx-token-p7fmv" (UniqueName: "kubernetes.io/secret/4da12362-2965-4658-a458-fc368c6b717e-ingress-nginx-token-p7fmv") on node "ingress-addon-legacy-200414" DevicePath ""
	Aug 11 23:17:51 ingress-addon-legacy-200414 kubelet[1610]: I0811 23:17:51.724022    1610 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/4da12362-2965-4658-a458-fc368c6b717e-webhook-cert") on node "ingress-addon-legacy-200414" DevicePath ""
	Aug 11 23:17:51 ingress-addon-legacy-200414 kubelet[1610]: I0811 23:17:51.740903    1610 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: fc560e9955269c6ba3affd4d5bbb3c70b4a23d13db77981c78f64b252e07ce77
	Aug 11 23:17:52 ingress-addon-legacy-200414 kubelet[1610]: I0811 23:17:52.167276    1610 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: fc560e9955269c6ba3affd4d5bbb3c70b4a23d13db77981c78f64b252e07ce77
	Aug 11 23:17:52 ingress-addon-legacy-200414 kubelet[1610]: I0811 23:17:52.167546    1610 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 17187ddfa02ba4ecf16252a6a367223a5d3b9e9a03eaa4a4cc73deb7c93417b5
	Aug 11 23:17:52 ingress-addon-legacy-200414 kubelet[1610]: E0811 23:17:52.167807    1610 pod_workers.go:191] Error syncing pod cea1e5be-bf08-49d5-b8bc-b43da49e9a22 ("hello-world-app-5f5d8b66bb-hw8nq_default(cea1e5be-bf08-49d5-b8bc-b43da49e9a22)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-hw8nq_default(cea1e5be-bf08-49d5-b8bc-b43da49e9a22)"
	Aug 11 23:17:52 ingress-addon-legacy-200414 kubelet[1610]: W0811 23:17:52.170398    1610 pod_container_deletor.go:77] Container "1f7ebf1e8f06a9abd95aa3ec7e79c8c1e9526f25b36238a6aa7abb4731491ea6" not found in pod's containers
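
Besides the hello-world-app CrashLoopBackOff, the kubelet log records the ingress-dns image failing with a short-name error: `cryptexlabs/minikube-ingress-dns:0.3.0` carries no registry host and CRI-O's /etc/containers/registries.conf defines no unqualified-search registries. Two possible remedies, sketched (the first is run inside the node via `minikube ssh`):

	# Option 1: allow docker.io as the default for unqualified image names.
	echo 'unqualified-search-registries = ["docker.io"]' | sudo tee -a /etc/containers/registries.conf
	sudo systemctl restart crio
	# Option 2 (preferable): fully qualify the reference in the manifest, e.g.
	#   docker.io/cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab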
	
	* 
	* ==> storage-provisioner [7f56e9c20f05f2b04a20f6f3fee61809ffe1a92b09e99066e20ee6d46b1e739b] <==
	* I0811 23:14:36.514122       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0811 23:14:36.527990       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0811 23:14:36.528074       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0811 23:14:36.534861       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0811 23:14:36.535576       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-200414_f2a2b61d-1cfd-4ee4-b44e-07501986092f!
	I0811 23:14:36.535429       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1e3e0f9b-067a-45eb-851e-115b83fd7447", APIVersion:"v1", ResourceVersion:"425", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-200414_f2a2b61d-1cfd-4ee4-b44e-07501986092f became leader
	I0811 23:14:36.636041       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-200414_f2a2b61d-1cfd-4ee4-b44e-07501986092f!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-200414 -n ingress-addon-legacy-200414
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-200414 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (183.47s)
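
Note: the storage-provisioner log above shows ordinary client-go leader election; the pod takes the kube-system/k8s.io-minikube-hostpath lock (an Endpoints-based lock, per the event record) before starting its controller. A minimal sketch of the same pattern using client-go's Lease-based lock; the identity string and kubeconfig lookup below are illustrative, not taken from the provisioner:

package main

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Same lock name/namespace as in the log; a Lease lock is the modern
	// replacement for the Endpoints lock the old provisioner used.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "example-identity"}, // hypothetical identity
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { /* start the controller here */ },
			OnStoppedLeading: func() { /* lost the lease; stop work */ },
		},
	})
}

Only the instance that acquires the lease runs the controller ("successfully acquired lease" then "Starting provisioner controller" in the log above); the others block and retry.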

TestMultiNode/serial/PingHostFrom2Pods (4.61s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-891155 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-891155 -- exec busybox-67b7f59bb-qc8x6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-891155 -- exec busybox-67b7f59bb-qc8x6 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-891155 -- exec busybox-67b7f59bb-qc8x6 -- sh -c "ping -c 1 192.168.58.1": exit status 1 (234.704893ms)

-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-67b7f59bb-qc8x6): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-891155 -- exec busybox-67b7f59bb-xv9cw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-891155 -- exec busybox-67b7f59bb-xv9cw -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-891155 -- exec busybox-67b7f59bb-xv9cw -- sh -c "ping -c 1 192.168.58.1": exit status 1 (258.122285ms)

-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-67b7f59bb-xv9cw): exit status 1
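
Note: the "ping: permission denied (are you root?)" failures above come from busybox's ping, which opens a raw ICMP socket and therefore needs CAP_NET_RAW (or a net.ipv4.ping_group_range covering the container's group); ping prints the target line and fails before sending a packet, so the gateway itself may be perfectly reachable. A minimal sketch (a hypothetical helper, not part of multinode_test.go) of telling this capability failure apart from a real connectivity failure:

package pingcheck

import (
	"fmt"
	"os/exec"
	"strings"
)

// PingFromPod runs the same `ping -c 1 <ip>` the test runs, via kubectl exec,
// and classifies a failure as a missing capability or a network problem.
func PingFromPod(kubecontext, pod, ip string) error {
	out, err := exec.Command("kubectl", "--context", kubecontext, "exec", pod,
		"--", "sh", "-c", "ping -c 1 "+ip).CombinedOutput()
	if err == nil {
		return nil
	}
	if strings.Contains(string(out), "permission denied") {
		// ping failed before sending anything: the container lacks
		// CAP_NET_RAW, i.e. a pod-security issue, not routing or CNI.
		return fmt.Errorf("pod %s cannot open a raw ICMP socket: %s", pod, out)
	}
	// Anything else (timeout, 100%% packet loss) points at routing or CNI.
	return fmt.Errorf("ping %s from %s failed: %v: %s", ip, pod, err, out)
}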
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-891155
helpers_test.go:235: (dbg) docker inspect multinode-891155:

-- stdout --
	[
	    {
	        "Id": "91ef6749902a9755bddb5f5abcfefe4686ab106ec29738faccd66ae8be66b0e1",
	        "Created": "2023-08-11T23:24:20.485211474Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 71778,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-08-11T23:24:20.813432035Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:abe4482d178dd08cce0cdcb8e444349673c3edfa8e7d6462144a8d9173479eb6",
	        "ResolvConfPath": "/var/lib/docker/containers/91ef6749902a9755bddb5f5abcfefe4686ab106ec29738faccd66ae8be66b0e1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/91ef6749902a9755bddb5f5abcfefe4686ab106ec29738faccd66ae8be66b0e1/hostname",
	        "HostsPath": "/var/lib/docker/containers/91ef6749902a9755bddb5f5abcfefe4686ab106ec29738faccd66ae8be66b0e1/hosts",
	        "LogPath": "/var/lib/docker/containers/91ef6749902a9755bddb5f5abcfefe4686ab106ec29738faccd66ae8be66b0e1/91ef6749902a9755bddb5f5abcfefe4686ab106ec29738faccd66ae8be66b0e1-json.log",
	        "Name": "/multinode-891155",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "multinode-891155:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-891155",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5c51856b525f8e595dbf103f8138c5b3c00e75a50d059f3134e4ebe798168dfb-init/diff:/var/lib/docker/overlay2/9f8bf17bd2eed1bf502486fc30f9be0589884e58aed50b5fbf77bc48ebc9a592/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5c51856b525f8e595dbf103f8138c5b3c00e75a50d059f3134e4ebe798168dfb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5c51856b525f8e595dbf103f8138c5b3c00e75a50d059f3134e4ebe798168dfb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5c51856b525f8e595dbf103f8138c5b3c00e75a50d059f3134e4ebe798168dfb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "multinode-891155",
	                "Source": "/var/lib/docker/volumes/multinode-891155/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-891155",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-891155",
	                "name.minikube.sigs.k8s.io": "multinode-891155",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "80b94db6fb8190d5932a98ff72af9246e51d63eed9fd7445df01ca559e5b78e8",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32847"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32846"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32843"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32845"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32844"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/80b94db6fb81",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-891155": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "91ef6749902a",
	                        "multinode-891155"
	                    ],
	                    "NetworkID": "c2f4372f433ab737ba49a276c04acbe79ec95a02a15654c629962d026a9101f7",
	                    "EndpointID": "a0c87d50bcccfba36d3e009b132fb455d8ee9db44917a344da211dfe21c7e9c3",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
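
Note: the Networks section above confirms that 192.168.58.1, the address the pods were asked to ping, is the gateway of the multinode-891155 bridge network, i.e. the host side of the Docker bridge. A quick sketch (assuming the docker CLI is on PATH and the container exists) for pulling that one field instead of dumping the full inspect JSON:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Go-template index into the Networks map; container and network
	// happen to share the name "multinode-891155" here.
	format := `{{(index .NetworkSettings.Networks "multinode-891155").Gateway}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format,
		"multinode-891155").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println(strings.TrimSpace(string(out))) // expected: 192.168.58.1
}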
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p multinode-891155 -n multinode-891155
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p multinode-891155 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p multinode-891155 logs -n 25: (1.663603151s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-258542                           | mount-start-2-258542 | jenkins | v1.31.1 | 11 Aug 23 23:23 UTC | 11 Aug 23 23:24 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-258542 ssh -- ls                    | mount-start-2-258542 | jenkins | v1.31.1 | 11 Aug 23 23:24 UTC | 11 Aug 23 23:24 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-256688                           | mount-start-1-256688 | jenkins | v1.31.1 | 11 Aug 23 23:24 UTC | 11 Aug 23 23:24 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-258542 ssh -- ls                    | mount-start-2-258542 | jenkins | v1.31.1 | 11 Aug 23 23:24 UTC | 11 Aug 23 23:24 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-258542                           | mount-start-2-258542 | jenkins | v1.31.1 | 11 Aug 23 23:24 UTC | 11 Aug 23 23:24 UTC |
	| start   | -p mount-start-2-258542                           | mount-start-2-258542 | jenkins | v1.31.1 | 11 Aug 23 23:24 UTC | 11 Aug 23 23:24 UTC |
	| ssh     | mount-start-2-258542 ssh -- ls                    | mount-start-2-258542 | jenkins | v1.31.1 | 11 Aug 23 23:24 UTC | 11 Aug 23 23:24 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-258542                           | mount-start-2-258542 | jenkins | v1.31.1 | 11 Aug 23 23:24 UTC | 11 Aug 23 23:24 UTC |
	| delete  | -p mount-start-1-256688                           | mount-start-1-256688 | jenkins | v1.31.1 | 11 Aug 23 23:24 UTC | 11 Aug 23 23:24 UTC |
	| start   | -p multinode-891155                               | multinode-891155     | jenkins | v1.31.1 | 11 Aug 23 23:24 UTC | 11 Aug 23 23:25 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-891155 -- apply -f                   | multinode-891155     | jenkins | v1.31.1 | 11 Aug 23 23:25 UTC | 11 Aug 23 23:25 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-891155 -- rollout                    | multinode-891155     | jenkins | v1.31.1 | 11 Aug 23 23:25 UTC | 11 Aug 23 23:25 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-891155 -- get pods -o                | multinode-891155     | jenkins | v1.31.1 | 11 Aug 23 23:25 UTC | 11 Aug 23 23:25 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-891155 -- get pods -o                | multinode-891155     | jenkins | v1.31.1 | 11 Aug 23 23:25 UTC | 11 Aug 23 23:25 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-891155 -- exec                       | multinode-891155     | jenkins | v1.31.1 | 11 Aug 23 23:25 UTC | 11 Aug 23 23:25 UTC |
	|         | busybox-67b7f59bb-qc8x6 --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-891155 -- exec                       | multinode-891155     | jenkins | v1.31.1 | 11 Aug 23 23:25 UTC | 11 Aug 23 23:25 UTC |
	|         | busybox-67b7f59bb-xv9cw --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-891155 -- exec                       | multinode-891155     | jenkins | v1.31.1 | 11 Aug 23 23:25 UTC | 11 Aug 23 23:25 UTC |
	|         | busybox-67b7f59bb-qc8x6 --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-891155 -- exec                       | multinode-891155     | jenkins | v1.31.1 | 11 Aug 23 23:25 UTC | 11 Aug 23 23:25 UTC |
	|         | busybox-67b7f59bb-xv9cw --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-891155 -- exec                       | multinode-891155     | jenkins | v1.31.1 | 11 Aug 23 23:25 UTC | 11 Aug 23 23:25 UTC |
	|         | busybox-67b7f59bb-qc8x6 -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-891155 -- exec                       | multinode-891155     | jenkins | v1.31.1 | 11 Aug 23 23:25 UTC | 11 Aug 23 23:25 UTC |
	|         | busybox-67b7f59bb-xv9cw -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-891155 -- get pods -o                | multinode-891155     | jenkins | v1.31.1 | 11 Aug 23 23:25 UTC | 11 Aug 23 23:25 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-891155 -- exec                       | multinode-891155     | jenkins | v1.31.1 | 11 Aug 23 23:25 UTC | 11 Aug 23 23:25 UTC |
	|         | busybox-67b7f59bb-qc8x6                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-891155 -- exec                       | multinode-891155     | jenkins | v1.31.1 | 11 Aug 23 23:25 UTC |                     |
	|         | busybox-67b7f59bb-qc8x6 -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-891155 -- exec                       | multinode-891155     | jenkins | v1.31.1 | 11 Aug 23 23:25 UTC | 11 Aug 23 23:26 UTC |
	|         | busybox-67b7f59bb-xv9cw                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-891155 -- exec                       | multinode-891155     | jenkins | v1.31.1 | 11 Aug 23 23:26 UTC |                     |
	|         | busybox-67b7f59bb-xv9cw -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/11 23:24:15
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.20.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0811 23:24:15.277049   71330 out.go:296] Setting OutFile to fd 1 ...
	I0811 23:24:15.277245   71330 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:24:15.277256   71330 out.go:309] Setting ErrFile to fd 2...
	I0811 23:24:15.277262   71330 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:24:15.277547   71330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17044-2333/.minikube/bin
	I0811 23:24:15.277972   71330 out.go:303] Setting JSON to false
	I0811 23:24:15.278936   71330 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":4004,"bootTime":1691792252,"procs":323,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0811 23:24:15.279000   71330 start.go:138] virtualization:  
	I0811 23:24:15.281597   71330 out.go:177] * [multinode-891155] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	I0811 23:24:15.283144   71330 out.go:177]   - MINIKUBE_LOCATION=17044
	I0811 23:24:15.284818   71330 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0811 23:24:15.283830   71330 notify.go:220] Checking for updates...
	I0811 23:24:15.286933   71330 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17044-2333/kubeconfig
	I0811 23:24:15.288702   71330 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17044-2333/.minikube
	I0811 23:24:15.290181   71330 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0811 23:24:15.291545   71330 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0811 23:24:15.293406   71330 driver.go:373] Setting default libvirt URI to qemu:///system
	I0811 23:24:15.317346   71330 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0811 23:24:15.317450   71330 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0811 23:24:15.411085   71330 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-08-11 23:24:15.40107499 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0811 23:24:15.411192   71330 docker.go:294] overlay module found
	I0811 23:24:15.414062   71330 out.go:177] * Using the docker driver based on user configuration
	I0811 23:24:15.415773   71330 start.go:298] selected driver: docker
	I0811 23:24:15.415790   71330 start.go:901] validating driver "docker" against <nil>
	I0811 23:24:15.415804   71330 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0811 23:24:15.416451   71330 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0811 23:24:15.490590   71330 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-08-11 23:24:15.48062518 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0811 23:24:15.490752   71330 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0811 23:24:15.490967   71330 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0811 23:24:15.492742   71330 out.go:177] * Using Docker driver with root privileges
	I0811 23:24:15.494161   71330 cni.go:84] Creating CNI manager for ""
	I0811 23:24:15.494176   71330 cni.go:136] 0 nodes found, recommending kindnet
	I0811 23:24:15.494188   71330 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0811 23:24:15.494201   71330 start_flags.go:319] config:
	{Name:multinode-891155 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-891155 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0811 23:24:15.496015   71330 out.go:177] * Starting control plane node multinode-891155 in cluster multinode-891155
	I0811 23:24:15.497815   71330 cache.go:122] Beginning downloading kic base image for docker with crio
	I0811 23:24:15.499635   71330 out.go:177] * Pulling base image ...
	I0811 23:24:15.501346   71330 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0811 23:24:15.501399   71330 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17044-2333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-arm64.tar.lz4
	I0811 23:24:15.501411   71330 cache.go:57] Caching tarball of preloaded images
	I0811 23:24:15.501448   71330 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 in local docker daemon
	I0811 23:24:15.501487   71330 preload.go:174] Found /home/jenkins/minikube-integration/17044-2333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0811 23:24:15.501496   71330 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on crio
	I0811 23:24:15.501834   71330 profile.go:148] Saving config to /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/multinode-891155/config.json ...
	I0811 23:24:15.501858   71330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/multinode-891155/config.json: {Name:mk15725d895c7fe54736b97fefd73178cb17f297 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:24:15.518794   71330 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 in local docker daemon, skipping pull
	I0811 23:24:15.518818   71330 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 exists in daemon, skipping load
	I0811 23:24:15.518839   71330 cache.go:195] Successfully downloaded all kic artifacts
	I0811 23:24:15.518892   71330 start.go:365] acquiring machines lock for multinode-891155: {Name:mk1a25585eb37b531f453af9b55df8c156afcaaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 23:24:15.519006   71330 start.go:369] acquired machines lock for "multinode-891155" in 90.02µs
	I0811 23:24:15.519034   71330 start.go:93] Provisioning new machine with config: &{Name:multinode-891155 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-891155 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0811 23:24:15.519124   71330 start.go:125] createHost starting for "" (driver="docker")
	I0811 23:24:15.521152   71330 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0811 23:24:15.521422   71330 start.go:159] libmachine.API.Create for "multinode-891155" (driver="docker")
	I0811 23:24:15.521451   71330 client.go:168] LocalClient.Create starting
	I0811 23:24:15.521521   71330 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem
	I0811 23:24:15.521556   71330 main.go:141] libmachine: Decoding PEM data...
	I0811 23:24:15.521578   71330 main.go:141] libmachine: Parsing certificate...
	I0811 23:24:15.521628   71330 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17044-2333/.minikube/certs/cert.pem
	I0811 23:24:15.521651   71330 main.go:141] libmachine: Decoding PEM data...
	I0811 23:24:15.521665   71330 main.go:141] libmachine: Parsing certificate...
	I0811 23:24:15.522002   71330 cli_runner.go:164] Run: docker network inspect multinode-891155 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0811 23:24:15.539430   71330 cli_runner.go:211] docker network inspect multinode-891155 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0811 23:24:15.539527   71330 network_create.go:281] running [docker network inspect multinode-891155] to gather additional debugging logs...
	I0811 23:24:15.539548   71330 cli_runner.go:164] Run: docker network inspect multinode-891155
	W0811 23:24:15.555979   71330 cli_runner.go:211] docker network inspect multinode-891155 returned with exit code 1
	I0811 23:24:15.556012   71330 network_create.go:284] error running [docker network inspect multinode-891155]: docker network inspect multinode-891155: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-891155 not found
	I0811 23:24:15.556030   71330 network_create.go:286] output of [docker network inspect multinode-891155]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-891155 not found
	
	** /stderr **
	I0811 23:24:15.556084   71330 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0811 23:24:15.573217   71330 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-cb015cdafab9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:3c:25:af:38} reservation:<nil>}
	I0811 23:24:15.573537   71330 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40011c1770}
	I0811 23:24:15.573557   71330 network_create.go:123] attempt to create docker network multinode-891155 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0811 23:24:15.573613   71330 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-891155 multinode-891155
	I0811 23:24:15.640767   71330 network_create.go:107] docker network multinode-891155 192.168.58.0/24 created
	I0811 23:24:15.640795   71330 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-891155" container
	I0811 23:24:15.640877   71330 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0811 23:24:15.657298   71330 cli_runner.go:164] Run: docker volume create multinode-891155 --label name.minikube.sigs.k8s.io=multinode-891155 --label created_by.minikube.sigs.k8s.io=true
	I0811 23:24:15.675561   71330 oci.go:103] Successfully created a docker volume multinode-891155
	I0811 23:24:15.675643   71330 cli_runner.go:164] Run: docker run --rm --name multinode-891155-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-891155 --entrypoint /usr/bin/test -v multinode-891155:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 -d /var/lib
	I0811 23:24:16.228084   71330 oci.go:107] Successfully prepared a docker volume multinode-891155
	I0811 23:24:16.228134   71330 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0811 23:24:16.228154   71330 kic.go:190] Starting extracting preloaded images to volume ...
	I0811 23:24:16.228246   71330 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17044-2333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-891155:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 -I lz4 -xf /preloaded.tar -C /extractDir
	I0811 23:24:20.395946   71330 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17044-2333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-891155:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 -I lz4 -xf /preloaded.tar -C /extractDir: (4.167660516s)
	I0811 23:24:20.395974   71330 kic.go:199] duration metric: took 4.167817 seconds to extract preloaded images to volume
	W0811 23:24:20.396113   71330 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0811 23:24:20.396231   71330 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0811 23:24:20.469458   71330 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-891155 --name multinode-891155 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-891155 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-891155 --network multinode-891155 --ip 192.168.58.2 --volume multinode-891155:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37
	I0811 23:24:20.821783   71330 cli_runner.go:164] Run: docker container inspect multinode-891155 --format={{.State.Running}}
	I0811 23:24:20.858062   71330 cli_runner.go:164] Run: docker container inspect multinode-891155 --format={{.State.Status}}
	I0811 23:24:20.883628   71330 cli_runner.go:164] Run: docker exec multinode-891155 stat /var/lib/dpkg/alternatives/iptables
	I0811 23:24:20.943470   71330 oci.go:144] the created container "multinode-891155" has a running status.
	I0811 23:24:20.943498   71330 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17044-2333/.minikube/machines/multinode-891155/id_rsa...
	I0811 23:24:21.174310   71330 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/machines/multinode-891155/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0811 23:24:21.174366   71330 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17044-2333/.minikube/machines/multinode-891155/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0811 23:24:21.211007   71330 cli_runner.go:164] Run: docker container inspect multinode-891155 --format={{.State.Status}}
	I0811 23:24:21.236231   71330 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0811 23:24:21.236251   71330 kic_runner.go:114] Args: [docker exec --privileged multinode-891155 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0811 23:24:21.336653   71330 cli_runner.go:164] Run: docker container inspect multinode-891155 --format={{.State.Status}}
	I0811 23:24:21.360297   71330 machine.go:88] provisioning docker machine ...
	I0811 23:24:21.360323   71330 ubuntu.go:169] provisioning hostname "multinode-891155"
	I0811 23:24:21.360383   71330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-891155
	I0811 23:24:21.386507   71330 main.go:141] libmachine: Using SSH client type: native
	I0811 23:24:21.386967   71330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f940] 0x3a22d0 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I0811 23:24:21.386980   71330 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-891155 && echo "multinode-891155" | sudo tee /etc/hostname
	I0811 23:24:21.387518   71330 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43596->127.0.0.1:32847: read: connection reset by peer
	I0811 23:24:24.548563   71330 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-891155
	
	I0811 23:24:24.548679   71330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-891155
	I0811 23:24:24.568275   71330 main.go:141] libmachine: Using SSH client type: native
	I0811 23:24:24.568725   71330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f940] 0x3a22d0 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I0811 23:24:24.568749   71330 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-891155' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-891155/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-891155' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0811 23:24:24.714284   71330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0811 23:24:24.714315   71330 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17044-2333/.minikube CaCertPath:/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17044-2333/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17044-2333/.minikube}
	I0811 23:24:24.714345   71330 ubuntu.go:177] setting up certificates
	I0811 23:24:24.714354   71330 provision.go:83] configureAuth start
	I0811 23:24:24.714416   71330 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-891155
	I0811 23:24:24.733430   71330 provision.go:138] copyHostCerts
	I0811 23:24:24.733472   71330 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17044-2333/.minikube/ca.pem
	I0811 23:24:24.733509   71330 exec_runner.go:144] found /home/jenkins/minikube-integration/17044-2333/.minikube/ca.pem, removing ...
	I0811 23:24:24.733520   71330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17044-2333/.minikube/ca.pem
	I0811 23:24:24.733602   71330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17044-2333/.minikube/ca.pem (1082 bytes)
	I0811 23:24:24.733691   71330 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17044-2333/.minikube/cert.pem
	I0811 23:24:24.733712   71330 exec_runner.go:144] found /home/jenkins/minikube-integration/17044-2333/.minikube/cert.pem, removing ...
	I0811 23:24:24.733719   71330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17044-2333/.minikube/cert.pem
	I0811 23:24:24.733749   71330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17044-2333/.minikube/cert.pem (1123 bytes)
	I0811 23:24:24.733800   71330 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17044-2333/.minikube/key.pem
	I0811 23:24:24.733820   71330 exec_runner.go:144] found /home/jenkins/minikube-integration/17044-2333/.minikube/key.pem, removing ...
	I0811 23:24:24.733824   71330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17044-2333/.minikube/key.pem
	I0811 23:24:24.733852   71330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17044-2333/.minikube/key.pem (1675 bytes)
	I0811 23:24:24.733944   71330 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17044-2333/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca-key.pem org=jenkins.multinode-891155 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-891155]
	I0811 23:24:25.111452   71330 provision.go:172] copyRemoteCerts
	I0811 23:24:25.111524   71330 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0811 23:24:25.111595   71330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-891155
	I0811 23:24:25.131188   71330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/multinode-891155/id_rsa Username:docker}
	I0811 23:24:25.235572   71330 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0811 23:24:25.235652   71330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0811 23:24:25.263399   71330 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0811 23:24:25.263472   71330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0811 23:24:25.291763   71330 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0811 23:24:25.291837   71330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
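	(For reference: the repeated "docker container inspect" calls above resolve the host port Docker mapped to the node container's sshd. A sketch of the same lookup, run by hand — the container name is the one from this run:)

	    # Host port mapped to 22/tcp inside the kic container
	    docker container inspect -f \
	      '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	      multinode-891155
	    # -> 32847 in this run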
	I0811 23:24:25.320608   71330 provision.go:86] duration metric: configureAuth took 606.236739ms
	I0811 23:24:25.320632   71330 ubuntu.go:193] setting minikube options for container-runtime
	I0811 23:24:25.320828   71330 config.go:182] Loaded profile config "multinode-891155": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0811 23:24:25.320930   71330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-891155
	I0811 23:24:25.339523   71330 main.go:141] libmachine: Using SSH client type: native
	I0811 23:24:25.339960   71330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f940] 0x3a22d0 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I0811 23:24:25.339976   71330 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0811 23:24:25.601177   71330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0811 23:24:25.601201   71330 machine.go:91] provisioned docker machine in 4.24088824s
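	(Provisioning above wrote a sysconfig drop-in and restarted CRI-O. A sketch of checking it on the node — the kicbase crio unit is assumed to source CRIO_MINIKUBE_OPTIONS from this file:)

	    # Inspect the environment drop-in the provisioner just wrote
	    cat /etc/sysconfig/crio.minikube
	    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '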
	I0811 23:24:25.601211   71330 client.go:171] LocalClient.Create took 10.079754479s
	I0811 23:24:25.601223   71330 start.go:167] duration metric: libmachine.API.Create for "multinode-891155" took 10.079800387s
	I0811 23:24:25.601230   71330 start.go:300] post-start starting for "multinode-891155" (driver="docker")
	I0811 23:24:25.601239   71330 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0811 23:24:25.601308   71330 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0811 23:24:25.601353   71330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-891155
	I0811 23:24:25.620042   71330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/multinode-891155/id_rsa Username:docker}
	I0811 23:24:25.724562   71330 ssh_runner.go:195] Run: cat /etc/os-release
	I0811 23:24:25.728790   71330 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.2 LTS"
	I0811 23:24:25.728808   71330 command_runner.go:130] > NAME="Ubuntu"
	I0811 23:24:25.728815   71330 command_runner.go:130] > VERSION_ID="22.04"
	I0811 23:24:25.728822   71330 command_runner.go:130] > VERSION="22.04.2 LTS (Jammy Jellyfish)"
	I0811 23:24:25.728828   71330 command_runner.go:130] > VERSION_CODENAME=jammy
	I0811 23:24:25.728833   71330 command_runner.go:130] > ID=ubuntu
	I0811 23:24:25.728837   71330 command_runner.go:130] > ID_LIKE=debian
	I0811 23:24:25.728843   71330 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0811 23:24:25.728852   71330 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0811 23:24:25.728859   71330 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0811 23:24:25.728878   71330 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0811 23:24:25.728883   71330 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0811 23:24:25.728929   71330 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0811 23:24:25.728961   71330 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0811 23:24:25.728977   71330 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0811 23:24:25.728985   71330 info.go:137] Remote host: Ubuntu 22.04.2 LTS
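	(The "Couldn't set key" warnings above are harmless: libmachine maps /etc/os-release into a struct that only has fields for a subset of the keys. A minimal sketch of reading the same fields on the node — os-release(5) is plain KEY=VALUE, so it can be sourced in a subshell:)

	    ( . /etc/os-release && echo "$PRETTY_NAME ($VERSION_CODENAME)" )
	    # -> Ubuntu 22.04.2 LTS (jammy)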
	I0811 23:24:25.728998   71330 filesync.go:126] Scanning /home/jenkins/minikube-integration/17044-2333/.minikube/addons for local assets ...
	I0811 23:24:25.729053   71330 filesync.go:126] Scanning /home/jenkins/minikube-integration/17044-2333/.minikube/files for local assets ...
	I0811 23:24:25.729190   71330 filesync.go:149] local asset: /home/jenkins/minikube-integration/17044-2333/.minikube/files/etc/ssl/certs/76342.pem -> 76342.pem in /etc/ssl/certs
	I0811 23:24:25.729203   71330 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/files/etc/ssl/certs/76342.pem -> /etc/ssl/certs/76342.pem
	I0811 23:24:25.729304   71330 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0811 23:24:25.739953   71330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/files/etc/ssl/certs/76342.pem --> /etc/ssl/certs/76342.pem (1708 bytes)
	I0811 23:24:25.768389   71330 start.go:303] post-start completed in 167.145253ms
	I0811 23:24:25.768768   71330 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-891155
	I0811 23:24:25.788343   71330 profile.go:148] Saving config to /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/multinode-891155/config.json ...
	I0811 23:24:25.788618   71330 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0811 23:24:25.788672   71330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-891155
	I0811 23:24:25.806678   71330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/multinode-891155/id_rsa Username:docker}
	I0811 23:24:25.910983   71330 command_runner.go:130] > 11%
	I0811 23:24:25.911074   71330 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0811 23:24:25.916858   71330 command_runner.go:130] > 175G
	I0811 23:24:25.916888   71330 start.go:128] duration metric: createHost completed in 10.397756184s
	I0811 23:24:25.916898   71330 start.go:83] releasing machines lock for "multinode-891155", held for 10.397881536s
	I0811 23:24:25.916964   71330 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-891155
	I0811 23:24:25.934982   71330 ssh_runner.go:195] Run: cat /version.json
	I0811 23:24:25.934995   71330 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0811 23:24:25.935040   71330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-891155
	I0811 23:24:25.935042   71330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-891155
	I0811 23:24:25.953454   71330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/multinode-891155/id_rsa Username:docker}
	I0811 23:24:25.972274   71330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/multinode-891155/id_rsa Username:docker}
	I0811 23:24:26.053598   71330 command_runner.go:130] > {"iso_version": "v1.31.0", "kicbase_version": "v0.0.40-1690799191-16971", "minikube_version": "v1.31.1", "commit": "c9a9d1e164f9532f3819e585f7a0abf3ece27773"}
	I0811 23:24:26.054080   71330 ssh_runner.go:195] Run: systemctl --version
	I0811 23:24:26.191675   71330 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0811 23:24:26.194946   71330 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.9)
	I0811 23:24:26.194979   71330 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0811 23:24:26.195045   71330 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0811 23:24:26.344133   71330 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0811 23:24:26.349412   71330 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0811 23:24:26.349443   71330 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0811 23:24:26.349456   71330 command_runner.go:130] > Device: 36h/54d	Inode: 1302513     Links: 1
	I0811 23:24:26.349464   71330 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0811 23:24:26.349471   71330 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0811 23:24:26.349478   71330 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0811 23:24:26.349484   71330 command_runner.go:130] > Change: 2023-08-11 23:01:53.862838644 +0000
	I0811 23:24:26.349493   71330 command_runner.go:130] >  Birth: 2023-08-11 23:01:53.862838644 +0000
	I0811 23:24:26.349824   71330 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0811 23:24:26.373791   71330 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0811 23:24:26.373870   71330 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0811 23:24:26.414160   71330 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0811 23:24:26.414193   71330 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
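	(minikube parks the stock loopback/bridge/podman CNI configs by renaming them, so CRI-O loads only the cluster's own CNI. A sketch to list what was disabled — renaming the files back would re-enable them:)

	    # Configs minikube renamed out of the way above
	    sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*.mk_disabled'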
	I0811 23:24:26.414201   71330 start.go:466] detecting cgroup driver to use...
	I0811 23:24:26.414235   71330 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0811 23:24:26.414288   71330 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0811 23:24:26.432133   71330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0811 23:24:26.446346   71330 docker.go:196] disabling cri-docker service (if available) ...
	I0811 23:24:26.446407   71330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0811 23:24:26.462926   71330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0811 23:24:26.479883   71330 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0811 23:24:26.571115   71330 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0811 23:24:26.667782   71330 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0811 23:24:26.667808   71330 docker.go:212] disabling docker service ...
	I0811 23:24:26.667859   71330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0811 23:24:26.690294   71330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0811 23:24:26.704199   71330 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0811 23:24:26.803366   71330 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0811 23:24:26.803486   71330 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0811 23:24:26.911181   71330 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0811 23:24:26.911261   71330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0811 23:24:26.925621   71330 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0811 23:24:26.944267   71330 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
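	(All later crictl calls in this log resolve the runtime socket from the file written above; a quick check:)

	    cat /etc/crictl.yaml
	    # runtime-endpoint: unix:///var/run/crio/crio.sock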
	I0811 23:24:26.945568   71330 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0811 23:24:26.945639   71330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0811 23:24:26.957561   71330 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0811 23:24:26.957635   71330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0811 23:24:26.970131   71330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0811 23:24:26.982473   71330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0811 23:24:26.994484   71330 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0811 23:24:27.005560   71330 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0811 23:24:27.015764   71330 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0811 23:24:27.017120   71330 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
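	(The net effect of the sed edits above on CRI-O's drop-in can be confirmed directly; the values shown are the ones chosen in this run:)

	    grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
	    # pause_image = "registry.k8s.io/pause:3.9"
	    # cgroup_manager = "cgroupfs"
	    # conmon_cgroup = "pod"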
	I0811 23:24:27.028204   71330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0811 23:24:27.126094   71330 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0811 23:24:27.248964   71330 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0811 23:24:27.249103   71330 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0811 23:24:27.253736   71330 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0811 23:24:27.253795   71330 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0811 23:24:27.253822   71330 command_runner.go:130] > Device: 43h/67d	Inode: 186         Links: 1
	I0811 23:24:27.253847   71330 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0811 23:24:27.253869   71330 command_runner.go:130] > Access: 2023-08-11 23:24:27.234948919 +0000
	I0811 23:24:27.253902   71330 command_runner.go:130] > Modify: 2023-08-11 23:24:27.234948919 +0000
	I0811 23:24:27.253927   71330 command_runner.go:130] > Change: 2023-08-11 23:24:27.234948919 +0000
	I0811 23:24:27.253947   71330 command_runner.go:130] >  Birth: -
	I0811 23:24:27.254239   71330 start.go:534] Will wait 60s for crictl version
	I0811 23:24:27.254322   71330 ssh_runner.go:195] Run: which crictl
	I0811 23:24:27.258780   71330 command_runner.go:130] > /usr/bin/crictl
	I0811 23:24:27.259291   71330 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0811 23:24:27.298734   71330 command_runner.go:130] > Version:  0.1.0
	I0811 23:24:27.298791   71330 command_runner.go:130] > RuntimeName:  cri-o
	I0811 23:24:27.298813   71330 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0811 23:24:27.298836   71330 command_runner.go:130] > RuntimeApiVersion:  v1
	I0811 23:24:27.301397   71330 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
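	(The version probe above reaches CRI-O through the socket from /etc/crictl.yaml; passing the endpoint explicitly should be equivalent:)

	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version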
	I0811 23:24:27.301514   71330 ssh_runner.go:195] Run: crio --version
	I0811 23:24:27.343598   71330 command_runner.go:130] > crio version 1.24.6
	I0811 23:24:27.343654   71330 command_runner.go:130] > Version:          1.24.6
	I0811 23:24:27.343684   71330 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0811 23:24:27.343704   71330 command_runner.go:130] > GitTreeState:     clean
	I0811 23:24:27.343732   71330 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0811 23:24:27.343754   71330 command_runner.go:130] > GoVersion:        go1.18.2
	I0811 23:24:27.343775   71330 command_runner.go:130] > Compiler:         gc
	I0811 23:24:27.343794   71330 command_runner.go:130] > Platform:         linux/arm64
	I0811 23:24:27.343823   71330 command_runner.go:130] > Linkmode:         dynamic
	I0811 23:24:27.343849   71330 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0811 23:24:27.343879   71330 command_runner.go:130] > SeccompEnabled:   true
	I0811 23:24:27.343906   71330 command_runner.go:130] > AppArmorEnabled:  false
	I0811 23:24:27.345778   71330 ssh_runner.go:195] Run: crio --version
	I0811 23:24:27.386389   71330 command_runner.go:130] > crio version 1.24.6
	I0811 23:24:27.386458   71330 command_runner.go:130] > Version:          1.24.6
	I0811 23:24:27.386481   71330 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0811 23:24:27.386502   71330 command_runner.go:130] > GitTreeState:     clean
	I0811 23:24:27.386524   71330 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0811 23:24:27.386551   71330 command_runner.go:130] > GoVersion:        go1.18.2
	I0811 23:24:27.386573   71330 command_runner.go:130] > Compiler:         gc
	I0811 23:24:27.386594   71330 command_runner.go:130] > Platform:         linux/arm64
	I0811 23:24:27.386613   71330 command_runner.go:130] > Linkmode:         dynamic
	I0811 23:24:27.386644   71330 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0811 23:24:27.386666   71330 command_runner.go:130] > SeccompEnabled:   true
	I0811 23:24:27.386688   71330 command_runner.go:130] > AppArmorEnabled:  false
	I0811 23:24:27.390719   71330 out.go:177] * Preparing Kubernetes v1.27.4 on CRI-O 1.24.6 ...
	I0811 23:24:27.392354   71330 cli_runner.go:164] Run: docker network inspect multinode-891155 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0811 23:24:27.410151   71330 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0811 23:24:27.414818   71330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
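	(The guarded /etc/hosts rewrite above is idempotent: it strips any stale host.minikube.internal entry, appends the current gateway IP, and copies the result back. What it leaves behind:)

	    grep 'host.minikube.internal' /etc/hosts
	    # 192.168.58.1	host.minikube.internal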
	I0811 23:24:27.428221   71330 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0811 23:24:27.428289   71330 ssh_runner.go:195] Run: sudo crictl images --output json
	I0811 23:24:27.488905   71330 command_runner.go:130] > {
	I0811 23:24:27.488926   71330 command_runner.go:130] >   "images": [
	I0811 23:24:27.488931   71330 command_runner.go:130] >     {
	I0811 23:24:27.488941   71330 command_runner.go:130] >       "id": "b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79",
	I0811 23:24:27.488948   71330 command_runner.go:130] >       "repoTags": [
	I0811 23:24:27.488955   71330 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0811 23:24:27.488959   71330 command_runner.go:130] >       ],
	I0811 23:24:27.488965   71330 command_runner.go:130] >       "repoDigests": [
	I0811 23:24:27.488980   71330 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2c39858b71cf6c5737ff0daa8130a6574d4c6bd2a7dacaf002060c02f2bc1b4f",
	I0811 23:24:27.488990   71330 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974"
	I0811 23:24:27.488999   71330 command_runner.go:130] >       ],
	I0811 23:24:27.489004   71330 command_runner.go:130] >       "size": "60881430",
	I0811 23:24:27.489010   71330 command_runner.go:130] >       "uid": null,
	I0811 23:24:27.489019   71330 command_runner.go:130] >       "username": "",
	I0811 23:24:27.489025   71330 command_runner.go:130] >       "spec": null,
	I0811 23:24:27.489033   71330 command_runner.go:130] >       "pinned": false
	I0811 23:24:27.489037   71330 command_runner.go:130] >     },
	I0811 23:24:27.489046   71330 command_runner.go:130] >     {
	I0811 23:24:27.489054   71330 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I0811 23:24:27.489062   71330 command_runner.go:130] >       "repoTags": [
	I0811 23:24:27.489069   71330 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0811 23:24:27.489074   71330 command_runner.go:130] >       ],
	I0811 23:24:27.489098   71330 command_runner.go:130] >       "repoDigests": [
	I0811 23:24:27.489109   71330 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I0811 23:24:27.489119   71330 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0811 23:24:27.489126   71330 command_runner.go:130] >       ],
	I0811 23:24:27.489132   71330 command_runner.go:130] >       "size": "29037500",
	I0811 23:24:27.489137   71330 command_runner.go:130] >       "uid": null,
	I0811 23:24:27.489142   71330 command_runner.go:130] >       "username": "",
	I0811 23:24:27.489149   71330 command_runner.go:130] >       "spec": null,
	I0811 23:24:27.489154   71330 command_runner.go:130] >       "pinned": false
	I0811 23:24:27.489158   71330 command_runner.go:130] >     },
	I0811 23:24:27.489162   71330 command_runner.go:130] >     {
	I0811 23:24:27.489170   71330 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I0811 23:24:27.489175   71330 command_runner.go:130] >       "repoTags": [
	I0811 23:24:27.489181   71330 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0811 23:24:27.489185   71330 command_runner.go:130] >       ],
	I0811 23:24:27.489191   71330 command_runner.go:130] >       "repoDigests": [
	I0811 23:24:27.489200   71330 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I0811 23:24:27.489209   71330 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I0811 23:24:27.489217   71330 command_runner.go:130] >       ],
	I0811 23:24:27.489223   71330 command_runner.go:130] >       "size": "51393451",
	I0811 23:24:27.489231   71330 command_runner.go:130] >       "uid": null,
	I0811 23:24:27.489244   71330 command_runner.go:130] >       "username": "",
	I0811 23:24:27.489253   71330 command_runner.go:130] >       "spec": null,
	I0811 23:24:27.489258   71330 command_runner.go:130] >       "pinned": false
	I0811 23:24:27.489265   71330 command_runner.go:130] >     },
	I0811 23:24:27.489269   71330 command_runner.go:130] >     {
	I0811 23:24:27.489282   71330 command_runner.go:130] >       "id": "24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737",
	I0811 23:24:27.489287   71330 command_runner.go:130] >       "repoTags": [
	I0811 23:24:27.489293   71330 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.7-0"
	I0811 23:24:27.489300   71330 command_runner.go:130] >       ],
	I0811 23:24:27.489306   71330 command_runner.go:130] >       "repoDigests": [
	I0811 23:24:27.489317   71330 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:1c19137e8a1716ce9f66c8c767bf114d7cad975db7a9784146486aa764f6dddd",
	I0811 23:24:27.489330   71330 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83"
	I0811 23:24:27.489342   71330 command_runner.go:130] >       ],
	I0811 23:24:27.489350   71330 command_runner.go:130] >       "size": "182283991",
	I0811 23:24:27.489355   71330 command_runner.go:130] >       "uid": {
	I0811 23:24:27.489360   71330 command_runner.go:130] >         "value": "0"
	I0811 23:24:27.489366   71330 command_runner.go:130] >       },
	I0811 23:24:27.489371   71330 command_runner.go:130] >       "username": "",
	I0811 23:24:27.489377   71330 command_runner.go:130] >       "spec": null,
	I0811 23:24:27.489385   71330 command_runner.go:130] >       "pinned": false
	I0811 23:24:27.489390   71330 command_runner.go:130] >     },
	I0811 23:24:27.489397   71330 command_runner.go:130] >     {
	I0811 23:24:27.489405   71330 command_runner.go:130] >       "id": "64aece92d6bde5b472d8185fcd2d5ab1add8814923a26561821f7cab5e819388",
	I0811 23:24:27.489414   71330 command_runner.go:130] >       "repoTags": [
	I0811 23:24:27.489420   71330 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.27.4"
	I0811 23:24:27.489428   71330 command_runner.go:130] >       ],
	I0811 23:24:27.489433   71330 command_runner.go:130] >       "repoDigests": [
	I0811 23:24:27.489446   71330 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d",
	I0811 23:24:27.489456   71330 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:f65711310c4a5a305faecd8630aeee145cda14bee3a018967c02a1495170e815"
	I0811 23:24:27.489463   71330 command_runner.go:130] >       ],
	I0811 23:24:27.489469   71330 command_runner.go:130] >       "size": "116270032",
	I0811 23:24:27.489477   71330 command_runner.go:130] >       "uid": {
	I0811 23:24:27.489482   71330 command_runner.go:130] >         "value": "0"
	I0811 23:24:27.489490   71330 command_runner.go:130] >       },
	I0811 23:24:27.489495   71330 command_runner.go:130] >       "username": "",
	I0811 23:24:27.489503   71330 command_runner.go:130] >       "spec": null,
	I0811 23:24:27.489508   71330 command_runner.go:130] >       "pinned": false
	I0811 23:24:27.489515   71330 command_runner.go:130] >     },
	I0811 23:24:27.489519   71330 command_runner.go:130] >     {
	I0811 23:24:27.489527   71330 command_runner.go:130] >       "id": "389f6f052cf83156f82a2bbbf6ea2c24292d246b58900d91f6a1707eacf510b2",
	I0811 23:24:27.489535   71330 command_runner.go:130] >       "repoTags": [
	I0811 23:24:27.489542   71330 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.27.4"
	I0811 23:24:27.489550   71330 command_runner.go:130] >       ],
	I0811 23:24:27.489556   71330 command_runner.go:130] >       "repoDigests": [
	I0811 23:24:27.489568   71330 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265",
	I0811 23:24:27.489584   71330 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:955b498eda0646d58e6d15e1156da8ac731dedf1a9a47b5fbccce0d5e29fd3fd"
	I0811 23:24:27.489592   71330 command_runner.go:130] >       ],
	I0811 23:24:27.489597   71330 command_runner.go:130] >       "size": "108667702",
	I0811 23:24:27.489602   71330 command_runner.go:130] >       "uid": {
	I0811 23:24:27.489607   71330 command_runner.go:130] >         "value": "0"
	I0811 23:24:27.489614   71330 command_runner.go:130] >       },
	I0811 23:24:27.489620   71330 command_runner.go:130] >       "username": "",
	I0811 23:24:27.489627   71330 command_runner.go:130] >       "spec": null,
	I0811 23:24:27.489633   71330 command_runner.go:130] >       "pinned": false
	I0811 23:24:27.489641   71330 command_runner.go:130] >     },
	I0811 23:24:27.489646   71330 command_runner.go:130] >     {
	I0811 23:24:27.489656   71330 command_runner.go:130] >       "id": "532e5a30e948f1c084333316b13e68fbeff8df667f3830b082005127a6d86317",
	I0811 23:24:27.489664   71330 command_runner.go:130] >       "repoTags": [
	I0811 23:24:27.489670   71330 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.27.4"
	I0811 23:24:27.489675   71330 command_runner.go:130] >       ],
	I0811 23:24:27.489683   71330 command_runner.go:130] >       "repoDigests": [
	I0811 23:24:27.489692   71330 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf",
	I0811 23:24:27.489704   71330 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:f22b84e066d9bb46451754c220ae6f85bfaf4b661636af4bcc22c221f9b8ccca"
	I0811 23:24:27.489712   71330 command_runner.go:130] >       ],
	I0811 23:24:27.489717   71330 command_runner.go:130] >       "size": "68099991",
	I0811 23:24:27.489725   71330 command_runner.go:130] >       "uid": null,
	I0811 23:24:27.489730   71330 command_runner.go:130] >       "username": "",
	I0811 23:24:27.489738   71330 command_runner.go:130] >       "spec": null,
	I0811 23:24:27.489743   71330 command_runner.go:130] >       "pinned": false
	I0811 23:24:27.489751   71330 command_runner.go:130] >     },
	I0811 23:24:27.489755   71330 command_runner.go:130] >     {
	I0811 23:24:27.489763   71330 command_runner.go:130] >       "id": "6eb63895cb67fce76da3ed6eaaa865ff55e7c761c9e6a691a83855ff0987a085",
	I0811 23:24:27.489771   71330 command_runner.go:130] >       "repoTags": [
	I0811 23:24:27.489778   71330 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.27.4"
	I0811 23:24:27.489786   71330 command_runner.go:130] >       ],
	I0811 23:24:27.489792   71330 command_runner.go:130] >       "repoDigests": [
	I0811 23:24:27.489831   71330 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:516cd341872a8d3c967df9a69eeff664651efbb9df438f8dce6bf3ab430f26f8",
	I0811 23:24:27.489846   71330 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af"
	I0811 23:24:27.489851   71330 command_runner.go:130] >       ],
	I0811 23:24:27.489856   71330 command_runner.go:130] >       "size": "57615158",
	I0811 23:24:27.489864   71330 command_runner.go:130] >       "uid": {
	I0811 23:24:27.489869   71330 command_runner.go:130] >         "value": "0"
	I0811 23:24:27.489877   71330 command_runner.go:130] >       },
	I0811 23:24:27.489883   71330 command_runner.go:130] >       "username": "",
	I0811 23:24:27.489891   71330 command_runner.go:130] >       "spec": null,
	I0811 23:24:27.489896   71330 command_runner.go:130] >       "pinned": false
	I0811 23:24:27.489903   71330 command_runner.go:130] >     },
	I0811 23:24:27.489908   71330 command_runner.go:130] >     {
	I0811 23:24:27.489919   71330 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I0811 23:24:27.489924   71330 command_runner.go:130] >       "repoTags": [
	I0811 23:24:27.489930   71330 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0811 23:24:27.489938   71330 command_runner.go:130] >       ],
	I0811 23:24:27.489943   71330 command_runner.go:130] >       "repoDigests": [
	I0811 23:24:27.489955   71330 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I0811 23:24:27.489968   71330 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I0811 23:24:27.489976   71330 command_runner.go:130] >       ],
	I0811 23:24:27.489981   71330 command_runner.go:130] >       "size": "520014",
	I0811 23:24:27.489989   71330 command_runner.go:130] >       "uid": {
	I0811 23:24:27.489994   71330 command_runner.go:130] >         "value": "65535"
	I0811 23:24:27.489999   71330 command_runner.go:130] >       },
	I0811 23:24:27.490006   71330 command_runner.go:130] >       "username": "",
	I0811 23:24:27.490011   71330 command_runner.go:130] >       "spec": null,
	I0811 23:24:27.490021   71330 command_runner.go:130] >       "pinned": false
	I0811 23:24:27.490026   71330 command_runner.go:130] >     }
	I0811 23:24:27.490035   71330 command_runner.go:130] >   ]
	I0811 23:24:27.490042   71330 command_runner.go:130] > }
	I0811 23:24:27.492480   71330 crio.go:496] all images are preloaded for cri-o runtime.
	I0811 23:24:27.492501   71330 crio.go:415] Images already preloaded, skipping extraction
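	(The preload check above parses the crictl image list. For a human-readable view of the same JSON, a jq sketch — assuming jq is available on the host:)

	    # Tag and size (bytes, reported as strings in this schema) per image
	    sudo crictl images --output json \
	      | jq -r '.images[] | .repoTags[0] + "  " + .size'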
	I0811 23:24:27.492559   71330 ssh_runner.go:195] Run: sudo crictl images --output json
	I0811 23:24:27.531008   71330 command_runner.go:130] > [image list JSON elided; byte-identical to the output of the previous "sudo crictl images --output json" run above]
	I0811 23:24:27.534749   71330 crio.go:496] all images are preloaded for cri-o runtime.
	I0811 23:24:27.534773   71330 cache_images.go:84] Images are preloaded, skipping loading
	I0811 23:24:27.534852   71330 ssh_runner.go:195] Run: crio config
	I0811 23:24:27.584056   71330 command_runner.go:130] ! time="2023-08-11 23:24:27.583724553Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0811 23:24:27.585312   71330 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0811 23:24:27.591192   71330 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0811 23:24:27.591218   71330 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0811 23:24:27.591227   71330 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0811 23:24:27.591241   71330 command_runner.go:130] > #
	I0811 23:24:27.591250   71330 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0811 23:24:27.591259   71330 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0811 23:24:27.591266   71330 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0811 23:24:27.591275   71330 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0811 23:24:27.591279   71330 command_runner.go:130] > # reload'.
	I0811 23:24:27.591286   71330 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0811 23:24:27.591294   71330 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0811 23:24:27.591301   71330 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0811 23:24:27.591309   71330 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0811 23:24:27.591313   71330 command_runner.go:130] > [crio]
	I0811 23:24:27.591320   71330 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0811 23:24:27.591326   71330 command_runner.go:130] > # containers images, in this directory.
	I0811 23:24:27.591333   71330 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0811 23:24:27.591341   71330 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0811 23:24:27.591347   71330 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0811 23:24:27.591355   71330 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0811 23:24:27.591363   71330 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0811 23:24:27.591368   71330 command_runner.go:130] > # storage_driver = "vfs"
	I0811 23:24:27.591375   71330 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0811 23:24:27.591382   71330 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0811 23:24:27.591387   71330 command_runner.go:130] > # storage_option = [
	I0811 23:24:27.591391   71330 command_runner.go:130] > # ]
	I0811 23:24:27.591399   71330 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0811 23:24:27.591406   71330 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0811 23:24:27.591411   71330 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0811 23:24:27.591418   71330 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0811 23:24:27.591426   71330 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0811 23:24:27.591431   71330 command_runner.go:130] > # always happen on a node reboot
	I0811 23:24:27.591438   71330 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0811 23:24:27.591444   71330 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0811 23:24:27.591451   71330 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0811 23:24:27.591460   71330 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0811 23:24:27.591467   71330 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0811 23:24:27.591477   71330 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0811 23:24:27.591487   71330 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0811 23:24:27.591491   71330 command_runner.go:130] > # internal_wipe = true
	I0811 23:24:27.591500   71330 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0811 23:24:27.591508   71330 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0811 23:24:27.591514   71330 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0811 23:24:27.591521   71330 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0811 23:24:27.591528   71330 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0811 23:24:27.591532   71330 command_runner.go:130] > [crio.api]
	I0811 23:24:27.591539   71330 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0811 23:24:27.591544   71330 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0811 23:24:27.591551   71330 command_runner.go:130] > # IP address on which the stream server will listen.
	I0811 23:24:27.591556   71330 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0811 23:24:27.591564   71330 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0811 23:24:27.591570   71330 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0811 23:24:27.591575   71330 command_runner.go:130] > # stream_port = "0"
	I0811 23:24:27.591581   71330 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0811 23:24:27.591586   71330 command_runner.go:130] > # stream_enable_tls = false
	I0811 23:24:27.591593   71330 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0811 23:24:27.591598   71330 command_runner.go:130] > # stream_idle_timeout = ""
	I0811 23:24:27.591606   71330 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0811 23:24:27.591613   71330 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0811 23:24:27.591617   71330 command_runner.go:130] > # minutes.
	I0811 23:24:27.591623   71330 command_runner.go:130] > # stream_tls_cert = ""
	I0811 23:24:27.591630   71330 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0811 23:24:27.591638   71330 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0811 23:24:27.591644   71330 command_runner.go:130] > # stream_tls_key = ""
	I0811 23:24:27.591651   71330 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0811 23:24:27.591659   71330 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0811 23:24:27.591665   71330 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0811 23:24:27.591672   71330 command_runner.go:130] > # stream_tls_ca = ""
	I0811 23:24:27.591689   71330 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0811 23:24:27.591695   71330 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0811 23:24:27.591704   71330 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0811 23:24:27.591709   71330 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0811 23:24:27.591724   71330 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0811 23:24:27.591731   71330 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0811 23:24:27.591736   71330 command_runner.go:130] > [crio.runtime]
	I0811 23:24:27.591743   71330 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0811 23:24:27.591750   71330 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0811 23:24:27.591755   71330 command_runner.go:130] > # "nofile=1024:2048"
	I0811 23:24:27.591765   71330 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0811 23:24:27.591770   71330 command_runner.go:130] > # default_ulimits = [
	I0811 23:24:27.591774   71330 command_runner.go:130] > # ]
	I0811 23:24:27.591781   71330 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0811 23:24:27.591786   71330 command_runner.go:130] > # no_pivot = false
	I0811 23:24:27.591793   71330 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0811 23:24:27.591800   71330 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0811 23:24:27.591806   71330 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0811 23:24:27.591813   71330 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0811 23:24:27.591819   71330 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0811 23:24:27.591827   71330 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0811 23:24:27.591831   71330 command_runner.go:130] > # conmon = ""
	I0811 23:24:27.591837   71330 command_runner.go:130] > # Cgroup setting for conmon
	I0811 23:24:27.591845   71330 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0811 23:24:27.591850   71330 command_runner.go:130] > conmon_cgroup = "pod"
	I0811 23:24:27.591857   71330 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0811 23:24:27.591863   71330 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0811 23:24:27.591872   71330 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0811 23:24:27.591876   71330 command_runner.go:130] > # conmon_env = [
	I0811 23:24:27.591880   71330 command_runner.go:130] > # ]
	I0811 23:24:27.591887   71330 command_runner.go:130] > # Additional environment variables to set for all the
	I0811 23:24:27.591893   71330 command_runner.go:130] > # containers. These are overridden if set in the
	I0811 23:24:27.591900   71330 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0811 23:24:27.591905   71330 command_runner.go:130] > # default_env = [
	I0811 23:24:27.591909   71330 command_runner.go:130] > # ]
	I0811 23:24:27.591916   71330 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0811 23:24:27.591920   71330 command_runner.go:130] > # selinux = false
	I0811 23:24:27.591928   71330 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0811 23:24:27.591936   71330 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0811 23:24:27.591942   71330 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0811 23:24:27.591948   71330 command_runner.go:130] > # seccomp_profile = ""
	I0811 23:24:27.591954   71330 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0811 23:24:27.591962   71330 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0811 23:24:27.591969   71330 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0811 23:24:27.591976   71330 command_runner.go:130] > # which might increase security.
	I0811 23:24:27.591982   71330 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0811 23:24:27.591990   71330 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0811 23:24:27.591997   71330 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0811 23:24:27.592005   71330 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0811 23:24:27.592014   71330 command_runner.go:130] > # the profile is set to "unconfined", then this is equivalent to disabling AppArmor.
	I0811 23:24:27.592020   71330 command_runner.go:130] > # This option supports live configuration reload.
	I0811 23:24:27.592026   71330 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0811 23:24:27.592033   71330 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0811 23:24:27.592038   71330 command_runner.go:130] > # the cgroup blockio controller.
	I0811 23:24:27.592044   71330 command_runner.go:130] > # blockio_config_file = ""
	I0811 23:24:27.592052   71330 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0811 23:24:27.592056   71330 command_runner.go:130] > # irqbalance daemon.
	I0811 23:24:27.592063   71330 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0811 23:24:27.592070   71330 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0811 23:24:27.592077   71330 command_runner.go:130] > # This option supports live configuration reload.
	I0811 23:24:27.592081   71330 command_runner.go:130] > # rdt_config_file = ""
	I0811 23:24:27.592088   71330 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0811 23:24:27.592093   71330 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0811 23:24:27.592100   71330 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0811 23:24:27.592106   71330 command_runner.go:130] > # separate_pull_cgroup = ""
	I0811 23:24:27.592114   71330 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0811 23:24:27.592122   71330 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0811 23:24:27.592126   71330 command_runner.go:130] > # will be added.
	I0811 23:24:27.592132   71330 command_runner.go:130] > # default_capabilities = [
	I0811 23:24:27.592136   71330 command_runner.go:130] > # 	"CHOWN",
	I0811 23:24:27.592141   71330 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0811 23:24:27.592146   71330 command_runner.go:130] > # 	"FSETID",
	I0811 23:24:27.592151   71330 command_runner.go:130] > # 	"FOWNER",
	I0811 23:24:27.592155   71330 command_runner.go:130] > # 	"SETGID",
	I0811 23:24:27.592160   71330 command_runner.go:130] > # 	"SETUID",
	I0811 23:24:27.592164   71330 command_runner.go:130] > # 	"SETPCAP",
	I0811 23:24:27.592169   71330 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0811 23:24:27.592174   71330 command_runner.go:130] > # 	"KILL",
	I0811 23:24:27.592178   71330 command_runner.go:130] > # ]
	I0811 23:24:27.592187   71330 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0811 23:24:27.592195   71330 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0811 23:24:27.592201   71330 command_runner.go:130] > # add_inheritable_capabilities = true
	I0811 23:24:27.592209   71330 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0811 23:24:27.592216   71330 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0811 23:24:27.592222   71330 command_runner.go:130] > # default_sysctls = [
	I0811 23:24:27.592226   71330 command_runner.go:130] > # ]
	I0811 23:24:27.592231   71330 command_runner.go:130] > # List of devices on the host that a
	I0811 23:24:27.592239   71330 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0811 23:24:27.592244   71330 command_runner.go:130] > # allowed_devices = [
	I0811 23:24:27.592248   71330 command_runner.go:130] > # 	"/dev/fuse",
	I0811 23:24:27.592252   71330 command_runner.go:130] > # ]
	I0811 23:24:27.592259   71330 command_runner.go:130] > # List of additional devices, specified as
	I0811 23:24:27.592282   71330 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0811 23:24:27.592289   71330 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0811 23:24:27.592296   71330 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0811 23:24:27.592301   71330 command_runner.go:130] > # additional_devices = [
	I0811 23:24:27.592307   71330 command_runner.go:130] > # ]
	I0811 23:24:27.592315   71330 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0811 23:24:27.592320   71330 command_runner.go:130] > # cdi_spec_dirs = [
	I0811 23:24:27.592325   71330 command_runner.go:130] > # 	"/etc/cdi",
	I0811 23:24:27.592330   71330 command_runner.go:130] > # 	"/var/run/cdi",
	I0811 23:24:27.592334   71330 command_runner.go:130] > # ]
	I0811 23:24:27.592342   71330 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0811 23:24:27.592349   71330 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking the host's uid/gid.
	I0811 23:24:27.592354   71330 command_runner.go:130] > # Defaults to false.
	I0811 23:24:27.592360   71330 command_runner.go:130] > # device_ownership_from_security_context = false
	I0811 23:24:27.592368   71330 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0811 23:24:27.592375   71330 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0811 23:24:27.592380   71330 command_runner.go:130] > # hooks_dir = [
	I0811 23:24:27.592386   71330 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0811 23:24:27.592390   71330 command_runner.go:130] > # ]
	I0811 23:24:27.592398   71330 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0811 23:24:27.592406   71330 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0811 23:24:27.592412   71330 command_runner.go:130] > # its default mounts from the following two files:
	I0811 23:24:27.592416   71330 command_runner.go:130] > #
	I0811 23:24:27.592424   71330 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0811 23:24:27.592431   71330 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0811 23:24:27.592439   71330 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0811 23:24:27.592442   71330 command_runner.go:130] > #
	I0811 23:24:27.592450   71330 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0811 23:24:27.592458   71330 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0811 23:24:27.592465   71330 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0811 23:24:27.592471   71330 command_runner.go:130] > #      only add mounts it finds in this file.
	I0811 23:24:27.592475   71330 command_runner.go:130] > #
	I0811 23:24:27.592480   71330 command_runner.go:130] > # default_mounts_file = ""
	I0811 23:24:27.592488   71330 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0811 23:24:27.592496   71330 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0811 23:24:27.592500   71330 command_runner.go:130] > # pids_limit = 0
	I0811 23:24:27.592508   71330 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0811 23:24:27.592515   71330 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0811 23:24:27.592523   71330 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0811 23:24:27.592534   71330 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0811 23:24:27.592538   71330 command_runner.go:130] > # log_size_max = -1
	I0811 23:24:27.592547   71330 command_runner.go:130] > # Whether container output should be logged to journald in addition to the Kubernetes log file
	I0811 23:24:27.592551   71330 command_runner.go:130] > # log_to_journald = false
	I0811 23:24:27.592559   71330 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0811 23:24:27.592565   71330 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0811 23:24:27.592571   71330 command_runner.go:130] > # Path to directory for container attach sockets.
	I0811 23:24:27.592577   71330 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0811 23:24:27.592584   71330 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0811 23:24:27.592589   71330 command_runner.go:130] > # bind_mount_prefix = ""
	I0811 23:24:27.592595   71330 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0811 23:24:27.592600   71330 command_runner.go:130] > # read_only = false
	I0811 23:24:27.592608   71330 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0811 23:24:27.592615   71330 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0811 23:24:27.592620   71330 command_runner.go:130] > # live configuration reload.
	I0811 23:24:27.592625   71330 command_runner.go:130] > # log_level = "info"
	I0811 23:24:27.592632   71330 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0811 23:24:27.592638   71330 command_runner.go:130] > # This option supports live configuration reload.
	I0811 23:24:27.592643   71330 command_runner.go:130] > # log_filter = ""
	I0811 23:24:27.592650   71330 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0811 23:24:27.592657   71330 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0811 23:24:27.592662   71330 command_runner.go:130] > # separated by comma.
	I0811 23:24:27.592667   71330 command_runner.go:130] > # uid_mappings = ""
	I0811 23:24:27.592675   71330 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0811 23:24:27.592682   71330 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0811 23:24:27.592686   71330 command_runner.go:130] > # separated by comma.
	I0811 23:24:27.592691   71330 command_runner.go:130] > # gid_mappings = ""
	I0811 23:24:27.592698   71330 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0811 23:24:27.592706   71330 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0811 23:24:27.592713   71330 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0811 23:24:27.592718   71330 command_runner.go:130] > # minimum_mappable_uid = -1
	I0811 23:24:27.592726   71330 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0811 23:24:27.592734   71330 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0811 23:24:27.592741   71330 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0811 23:24:27.592746   71330 command_runner.go:130] > # minimum_mappable_gid = -1
	I0811 23:24:27.592753   71330 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0811 23:24:27.592760   71330 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0811 23:24:27.592769   71330 command_runner.go:130] > # value is 30s; lower values are ignored by CRI-O.
	I0811 23:24:27.592774   71330 command_runner.go:130] > # ctr_stop_timeout = 30
	I0811 23:24:27.592781   71330 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0811 23:24:27.592789   71330 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0811 23:24:27.592795   71330 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0811 23:24:27.592801   71330 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0811 23:24:27.592806   71330 command_runner.go:130] > # drop_infra_ctr = true
	I0811 23:24:27.592813   71330 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0811 23:24:27.592820   71330 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0811 23:24:27.592828   71330 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0811 23:24:27.592833   71330 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0811 23:24:27.592841   71330 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0811 23:24:27.592847   71330 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0811 23:24:27.592852   71330 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0811 23:24:27.592860   71330 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0811 23:24:27.592865   71330 command_runner.go:130] > # pinns_path = ""
	I0811 23:24:27.592872   71330 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0811 23:24:27.592879   71330 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0811 23:24:27.592887   71330 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0811 23:24:27.592892   71330 command_runner.go:130] > # default_runtime = "runc"
	I0811 23:24:27.592898   71330 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0811 23:24:27.592907   71330 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0811 23:24:27.592918   71330 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0811 23:24:27.592923   71330 command_runner.go:130] > # creation as a file is not desired either.
	I0811 23:24:27.592934   71330 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0811 23:24:27.592939   71330 command_runner.go:130] > # the hostname is being managed dynamically.
	I0811 23:24:27.592945   71330 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0811 23:24:27.592949   71330 command_runner.go:130] > # ]
	I0811 23:24:27.592956   71330 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0811 23:24:27.592964   71330 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0811 23:24:27.592972   71330 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0811 23:24:27.592980   71330 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0811 23:24:27.592983   71330 command_runner.go:130] > #
	I0811 23:24:27.592989   71330 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0811 23:24:27.592995   71330 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0811 23:24:27.593000   71330 command_runner.go:130] > #  runtime_type = "oci"
	I0811 23:24:27.593007   71330 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0811 23:24:27.593014   71330 command_runner.go:130] > #  privileged_without_host_devices = false
	I0811 23:24:27.593019   71330 command_runner.go:130] > #  allowed_annotations = []
	I0811 23:24:27.593023   71330 command_runner.go:130] > # Where:
	I0811 23:24:27.593030   71330 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0811 23:24:27.593037   71330 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0811 23:24:27.593045   71330 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0811 23:24:27.593052   71330 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0811 23:24:27.593057   71330 command_runner.go:130] > #   in $PATH.
	I0811 23:24:27.593064   71330 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0811 23:24:27.593070   71330 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0811 23:24:27.593078   71330 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0811 23:24:27.593112   71330 command_runner.go:130] > #   state.
	I0811 23:24:27.593131   71330 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0811 23:24:27.593138   71330 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0811 23:24:27.593146   71330 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0811 23:24:27.593152   71330 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0811 23:24:27.593169   71330 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0811 23:24:27.593185   71330 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0811 23:24:27.593195   71330 command_runner.go:130] > #   The currently recognized values are:
	I0811 23:24:27.593203   71330 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0811 23:24:27.593212   71330 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0811 23:24:27.593222   71330 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0811 23:24:27.593229   71330 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0811 23:24:27.593241   71330 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0811 23:24:27.593255   71330 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0811 23:24:27.593264   71330 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0811 23:24:27.593272   71330 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0811 23:24:27.593278   71330 command_runner.go:130] > #   should be moved to the container's cgroup
	I0811 23:24:27.593284   71330 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0811 23:24:27.593292   71330 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0811 23:24:27.593304   71330 command_runner.go:130] > runtime_type = "oci"
	I0811 23:24:27.593309   71330 command_runner.go:130] > runtime_root = "/run/runc"
	I0811 23:24:27.593314   71330 command_runner.go:130] > runtime_config_path = ""
	I0811 23:24:27.593322   71330 command_runner.go:130] > monitor_path = ""
	I0811 23:24:27.593326   71330 command_runner.go:130] > monitor_cgroup = ""
	I0811 23:24:27.593332   71330 command_runner.go:130] > monitor_exec_cgroup = ""
	I0811 23:24:27.593371   71330 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0811 23:24:27.593380   71330 command_runner.go:130] > # running containers
	I0811 23:24:27.593387   71330 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0811 23:24:27.593395   71330 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0811 23:24:27.593405   71330 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0811 23:24:27.593414   71330 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0811 23:24:27.593420   71330 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0811 23:24:27.593426   71330 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0811 23:24:27.593434   71330 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0811 23:24:27.593439   71330 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0811 23:24:27.593445   71330 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0811 23:24:27.593451   71330 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0811 23:24:27.593461   71330 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0811 23:24:27.593470   71330 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0811 23:24:27.593477   71330 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0811 23:24:27.593487   71330 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix, and a set of resources it supports mutating.
	I0811 23:24:27.593499   71330 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0811 23:24:27.593506   71330 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0811 23:24:27.593522   71330 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0811 23:24:27.593532   71330 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0811 23:24:27.593546   71330 command_runner.go:130] > # signifying that the default value for that resource type should be overridden.
	I0811 23:24:27.593558   71330 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0811 23:24:27.593562   71330 command_runner.go:130] > # Example:
	I0811 23:24:27.593568   71330 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0811 23:24:27.593577   71330 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0811 23:24:27.593583   71330 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0811 23:24:27.593591   71330 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0811 23:24:27.593598   71330 command_runner.go:130] > # cpuset = "0-1"
	I0811 23:24:27.593602   71330 command_runner.go:130] > # cpushares = 0
	I0811 23:24:27.593607   71330 command_runner.go:130] > # Where:
	I0811 23:24:27.593612   71330 command_runner.go:130] > # The workload name is workload-type.
	I0811 23:24:27.593621   71330 command_runner.go:130] > # To select this workload, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0811 23:24:27.593630   71330 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0811 23:24:27.593637   71330 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0811 23:24:27.593646   71330 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0811 23:24:27.593660   71330 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0811 23:24:27.593664   71330 command_runner.go:130] > # 
	I0811 23:24:27.593674   71330 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0811 23:24:27.593681   71330 command_runner.go:130] > #
	I0811 23:24:27.593688   71330 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0811 23:24:27.593696   71330 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0811 23:24:27.593706   71330 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0811 23:24:27.593714   71330 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0811 23:24:27.593723   71330 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0811 23:24:27.593728   71330 command_runner.go:130] > [crio.image]
	I0811 23:24:27.593736   71330 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0811 23:24:27.593744   71330 command_runner.go:130] > # default_transport = "docker://"
	I0811 23:24:27.593753   71330 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0811 23:24:27.593763   71330 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0811 23:24:27.593768   71330 command_runner.go:130] > # global_auth_file = ""
	I0811 23:24:27.593774   71330 command_runner.go:130] > # The image used to instantiate infra containers.
	I0811 23:24:27.593780   71330 command_runner.go:130] > # This option supports live configuration reload.
	I0811 23:24:27.593788   71330 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0811 23:24:27.593797   71330 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0811 23:24:27.593812   71330 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0811 23:24:27.593818   71330 command_runner.go:130] > # This option supports live configuration reload.
	I0811 23:24:27.593830   71330 command_runner.go:130] > # pause_image_auth_file = ""
	I0811 23:24:27.593838   71330 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0811 23:24:27.593847   71330 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0811 23:24:27.593855   71330 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0811 23:24:27.593862   71330 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0811 23:24:27.593867   71330 command_runner.go:130] > # pause_command = "/pause"
	I0811 23:24:27.593877   71330 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0811 23:24:27.593887   71330 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0811 23:24:27.593895   71330 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0811 23:24:27.593905   71330 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0811 23:24:27.593913   71330 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0811 23:24:27.593918   71330 command_runner.go:130] > # signature_policy = ""
	I0811 23:24:27.593929   71330 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0811 23:24:27.593940   71330 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0811 23:24:27.593946   71330 command_runner.go:130] > # changing them here.
	I0811 23:24:27.593953   71330 command_runner.go:130] > # insecure_registries = [
	I0811 23:24:27.593959   71330 command_runner.go:130] > # ]
	I0811 23:24:27.593967   71330 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind, and
	I0811 23:24:27.593977   71330 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I0811 23:24:27.593982   71330 command_runner.go:130] > # image_volumes = "mkdir"
	I0811 23:24:27.593988   71330 command_runner.go:130] > # Temporary directory to use for storing big files
	I0811 23:24:27.593996   71330 command_runner.go:130] > # big_files_temporary_dir = ""
	I0811 23:24:27.594004   71330 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0811 23:24:27.594011   71330 command_runner.go:130] > # CNI plugins.
	I0811 23:24:27.594015   71330 command_runner.go:130] > [crio.network]
	I0811 23:24:27.594022   71330 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0811 23:24:27.594029   71330 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0811 23:24:27.594034   71330 command_runner.go:130] > # cni_default_network = ""
	I0811 23:24:27.594044   71330 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0811 23:24:27.594057   71330 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0811 23:24:27.594064   71330 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0811 23:24:27.594071   71330 command_runner.go:130] > # plugin_dirs = [
	I0811 23:24:27.594075   71330 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0811 23:24:27.594080   71330 command_runner.go:130] > # ]
	I0811 23:24:27.594089   71330 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0811 23:24:27.594094   71330 command_runner.go:130] > [crio.metrics]
	I0811 23:24:27.594100   71330 command_runner.go:130] > # Globally enable or disable metrics support.
	I0811 23:24:27.594109   71330 command_runner.go:130] > # enable_metrics = false
	I0811 23:24:27.594115   71330 command_runner.go:130] > # Specify enabled metrics collectors.
	I0811 23:24:27.594123   71330 command_runner.go:130] > # Per default all metrics are enabled.
	I0811 23:24:27.594131   71330 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0811 23:24:27.594141   71330 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0811 23:24:27.594148   71330 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0811 23:24:27.594156   71330 command_runner.go:130] > # metrics_collectors = [
	I0811 23:24:27.594160   71330 command_runner.go:130] > # 	"operations",
	I0811 23:24:27.594167   71330 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0811 23:24:27.594174   71330 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0811 23:24:27.594179   71330 command_runner.go:130] > # 	"operations_errors",
	I0811 23:24:27.594184   71330 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0811 23:24:27.594189   71330 command_runner.go:130] > # 	"image_pulls_by_name",
	I0811 23:24:27.594195   71330 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0811 23:24:27.594202   71330 command_runner.go:130] > # 	"image_pulls_failures",
	I0811 23:24:27.594207   71330 command_runner.go:130] > # 	"image_pulls_successes",
	I0811 23:24:27.594215   71330 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0811 23:24:27.594220   71330 command_runner.go:130] > # 	"image_layer_reuse",
	I0811 23:24:27.594226   71330 command_runner.go:130] > # 	"containers_oom_total",
	I0811 23:24:27.594233   71330 command_runner.go:130] > # 	"containers_oom",
	I0811 23:24:27.594238   71330 command_runner.go:130] > # 	"processes_defunct",
	I0811 23:24:27.594243   71330 command_runner.go:130] > # 	"operations_total",
	I0811 23:24:27.594251   71330 command_runner.go:130] > # 	"operations_latency_seconds",
	I0811 23:24:27.594258   71330 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0811 23:24:27.594264   71330 command_runner.go:130] > # 	"operations_errors_total",
	I0811 23:24:27.594272   71330 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0811 23:24:27.594277   71330 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0811 23:24:27.594284   71330 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0811 23:24:27.594291   71330 command_runner.go:130] > # 	"image_pulls_success_total",
	I0811 23:24:27.594297   71330 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0811 23:24:27.594304   71330 command_runner.go:130] > # 	"containers_oom_count_total",
	I0811 23:24:27.594308   71330 command_runner.go:130] > # ]
	I0811 23:24:27.594315   71330 command_runner.go:130] > # The port on which the metrics server will listen.
	I0811 23:24:27.594321   71330 command_runner.go:130] > # metrics_port = 9090
	I0811 23:24:27.594328   71330 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0811 23:24:27.594337   71330 command_runner.go:130] > # metrics_socket = ""
	I0811 23:24:27.594344   71330 command_runner.go:130] > # The certificate for the secure metrics server.
	I0811 23:24:27.594351   71330 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0811 23:24:27.594358   71330 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0811 23:24:27.594366   71330 command_runner.go:130] > # certificate on any modification event.
	I0811 23:24:27.594374   71330 command_runner.go:130] > # metrics_cert = ""
	I0811 23:24:27.594383   71330 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0811 23:24:27.594392   71330 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0811 23:24:27.594397   71330 command_runner.go:130] > # metrics_key = ""
	I0811 23:24:27.594404   71330 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0811 23:24:27.594412   71330 command_runner.go:130] > [crio.tracing]
	I0811 23:24:27.594418   71330 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0811 23:24:27.594423   71330 command_runner.go:130] > # enable_tracing = false
	I0811 23:24:27.594433   71330 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0811 23:24:27.594438   71330 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0811 23:24:27.594447   71330 command_runner.go:130] > # Number of samples to collect per million spans.
	I0811 23:24:27.594456   71330 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0811 23:24:27.594463   71330 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0811 23:24:27.594469   71330 command_runner.go:130] > [crio.stats]
	I0811 23:24:27.594479   71330 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0811 23:24:27.594485   71330 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0811 23:24:27.594494   71330 command_runner.go:130] > # stats_collection_period = 0
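	The block above is the node's effective CRI-O configuration (crio.conf, TOML), echoed line by line. For reference, a minimal sketch of how to inspect and live-reload that configuration on a CRI-O host; these commands are illustrative and were not part of this test run:
	
	# Print the TOML configuration CRI-O would run with, defaults included:
	sudo crio config | less
	# Options marked "supports live configuration reload" above can be re-read
	# without a restart; the systemd unit's reload delivers SIGHUP to crio:
	sudo systemctl reload crio
	sudo journalctl -u crio -n 20    # confirm the reload in the daemon log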
	I0811 23:24:27.594575   71330 cni.go:84] Creating CNI manager for ""
	I0811 23:24:27.594585   71330 cni.go:136] 1 nodes found, recommending kindnet
	I0811 23:24:27.594614   71330 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0811 23:24:27.594638   71330 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-891155 NodeName:multinode-891155 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0811 23:24:27.594784   71330 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-891155"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
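	The YAML above is the kubeadm configuration minikube renders (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration in one multi-document file). A hedged sketch of exercising such a file by hand; the path matches the kubeadm.yaml.new transfer logged further below, and `kubeadm config validate` assumes a recent kubeadm release:
	
	# Check the rendered config against the kubeadm API schema, then use it:
	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new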
	
	I0811 23:24:27.594860   71330 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-891155 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:multinode-891155 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
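	The [Service] drop-in above clears and replaces kubelet's ExecStart. Applying such a drop-in by hand follows the standard systemd workflow; minikube performs the equivalent over SSH, as the mkdir/scp lines just below show:
	
	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	# (write the drop-in shown above to 10-kubeadm.conf in that directory)
	sudo systemctl daemon-reload
	sudo systemctl restart kubelet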
	I0811 23:24:27.594931   71330 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0811 23:24:27.604501   71330 command_runner.go:130] > kubeadm
	I0811 23:24:27.604518   71330 command_runner.go:130] > kubectl
	I0811 23:24:27.604527   71330 command_runner.go:130] > kubelet
	I0811 23:24:27.605706   71330 binaries.go:44] Found k8s binaries, skipping transfer
	I0811 23:24:27.605781   71330 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0811 23:24:27.616173   71330 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I0811 23:24:27.637158   71330 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0811 23:24:27.659075   71330 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I0811 23:24:27.680203   71330 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0811 23:24:27.684840   71330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
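	The bash one-liner above rewrites /etc/hosts so control-plane.minikube.internal resolves to the node IP. A quick way to check that the pin took effect (illustrative, not part of the run):
	
	getent hosts control-plane.minikube.internal   # expect 192.168.58.2
	grep control-plane.minikube.internal /etc/hosts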
	I0811 23:24:27.698201   71330 certs.go:56] Setting up /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/multinode-891155 for IP: 192.168.58.2
	I0811 23:24:27.698233   71330 certs.go:190] acquiring lock for shared ca certs: {Name:mk92ef0e52f7a4bf6e55e35fe7431dc846a67439 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:24:27.698364   71330 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17044-2333/.minikube/ca.key
	I0811 23:24:27.698412   71330 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17044-2333/.minikube/proxy-client-ca.key
	I0811 23:24:27.698460   71330 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/multinode-891155/client.key
	I0811 23:24:27.698475   71330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/multinode-891155/client.crt with IP's: []
	I0811 23:24:28.357388   71330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/multinode-891155/client.crt ...
	I0811 23:24:28.357420   71330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/multinode-891155/client.crt: {Name:mk3dbd91760c954c9c1572f4e435fe68174869bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:24:28.357618   71330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/multinode-891155/client.key ...
	I0811 23:24:28.357630   71330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/multinode-891155/client.key: {Name:mkbfffcfd30652d528145b0bfa5947a36d277a74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:24:28.357711   71330 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/multinode-891155/apiserver.key.cee25041
	I0811 23:24:28.357728   71330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/multinode-891155/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0811 23:24:28.949808   71330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/multinode-891155/apiserver.crt.cee25041 ...
	I0811 23:24:28.949841   71330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/multinode-891155/apiserver.crt.cee25041: {Name:mkf89637d10554b82c765a70179ee6e9b5c142f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:24:28.950034   71330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/multinode-891155/apiserver.key.cee25041 ...
	I0811 23:24:28.950047   71330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/multinode-891155/apiserver.key.cee25041: {Name:mkb661585dc8c8bee4ec31f6b32cb3b7198139de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:24:28.950131   71330 certs.go:337] copying /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/multinode-891155/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/multinode-891155/apiserver.crt
	I0811 23:24:28.950211   71330 certs.go:341] copying /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/multinode-891155/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/multinode-891155/apiserver.key
	I0811 23:24:28.950271   71330 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/multinode-891155/proxy-client.key
	I0811 23:24:28.950287   71330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/multinode-891155/proxy-client.crt with IP's: []
	I0811 23:24:29.130812   71330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/multinode-891155/proxy-client.crt ...
	I0811 23:24:29.130839   71330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/multinode-891155/proxy-client.crt: {Name:mkc4ac399599d8d05dcbd3d2b5befe84f1d58f02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:24:29.131025   71330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/multinode-891155/proxy-client.key ...
	I0811 23:24:29.131037   71330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/multinode-891155/proxy-client.key: {Name:mk61ed493aec4c06fc3b7129ff17f976b144a347 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:24:29.131125   71330 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/multinode-891155/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0811 23:24:29.131146   71330 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/multinode-891155/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0811 23:24:29.131162   71330 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/multinode-891155/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0811 23:24:29.131176   71330 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/multinode-891155/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0811 23:24:29.131187   71330 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0811 23:24:29.131203   71330 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0811 23:24:29.131218   71330 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0811 23:24:29.131233   71330 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
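	The apiserver certificate generated above embeds the node and service IPs (192.168.58.2, 10.96.0.1, 127.0.0.1, 10.0.0.1) as SANs and is signed by the shared minikube CA. A minimal openssl sketch of the equivalent; minikube actually does this in Go (crypto.go), and the file names here are illustrative:
	
	# Key and CSR for the apiserver identity:
	openssl genrsa -out apiserver.key 2048
	openssl req -new -key apiserver.key -subj "/CN=minikube" -out apiserver.csr
	# SAN extension listing the IPs from the log above:
	printf 'subjectAltName=IP:192.168.58.2,IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1\nextendedKeyUsage=serverAuth\n' > san.ext
	# Sign with the cluster CA (ca.crt/ca.key stand in for the shared minikube CA):
	openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key \
	  -CAcreateserial -days 365 -extfile san.ext -out apiserver.crt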
	I0811 23:24:29.131288   71330 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/home/jenkins/minikube-integration/17044-2333/.minikube/certs/7634.pem (1338 bytes)
	W0811 23:24:29.131326   71330 certs.go:433] ignoring /home/jenkins/minikube-integration/17044-2333/.minikube/certs/home/jenkins/minikube-integration/17044-2333/.minikube/certs/7634_empty.pem, impossibly tiny 0 bytes
	I0811 23:24:29.131341   71330 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca-key.pem (1675 bytes)
	I0811 23:24:29.131366   71330 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem (1082 bytes)
	I0811 23:24:29.131395   71330 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/home/jenkins/minikube-integration/17044-2333/.minikube/certs/cert.pem (1123 bytes)
	I0811 23:24:29.131428   71330 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/home/jenkins/minikube-integration/17044-2333/.minikube/certs/key.pem (1675 bytes)
	I0811 23:24:29.131484   71330 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-2333/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17044-2333/.minikube/files/etc/ssl/certs/76342.pem (1708 bytes)
	I0811 23:24:29.131515   71330 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/7634.pem -> /usr/share/ca-certificates/7634.pem
	I0811 23:24:29.131531   71330 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/files/etc/ssl/certs/76342.pem -> /usr/share/ca-certificates/76342.pem
	I0811 23:24:29.131542   71330 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0811 23:24:29.132104   71330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/multinode-891155/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0811 23:24:29.160223   71330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/multinode-891155/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0811 23:24:29.188182   71330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/multinode-891155/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0811 23:24:29.216366   71330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/multinode-891155/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0811 23:24:29.244785   71330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0811 23:24:29.273710   71330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0811 23:24:29.302742   71330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0811 23:24:29.330833   71330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0811 23:24:29.359903   71330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/certs/7634.pem --> /usr/share/ca-certificates/7634.pem (1338 bytes)
	I0811 23:24:29.387847   71330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/files/etc/ssl/certs/76342.pem --> /usr/share/ca-certificates/76342.pem (1708 bytes)
	I0811 23:24:29.415826   71330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0811 23:24:29.444408   71330 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0811 23:24:29.465995   71330 ssh_runner.go:195] Run: openssl version
	I0811 23:24:29.472650   71330 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0811 23:24:29.473162   71330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7634.pem && ln -fs /usr/share/ca-certificates/7634.pem /etc/ssl/certs/7634.pem"
	I0811 23:24:29.484689   71330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7634.pem
	I0811 23:24:29.489166   71330 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 11 23:09 /usr/share/ca-certificates/7634.pem
	I0811 23:24:29.489502   71330 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 11 23:09 /usr/share/ca-certificates/7634.pem
	I0811 23:24:29.489581   71330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7634.pem
	I0811 23:24:29.497784   71330 command_runner.go:130] > 51391683
	I0811 23:24:29.498227   71330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7634.pem /etc/ssl/certs/51391683.0"
	I0811 23:24:29.510027   71330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/76342.pem && ln -fs /usr/share/ca-certificates/76342.pem /etc/ssl/certs/76342.pem"
	I0811 23:24:29.521756   71330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/76342.pem
	I0811 23:24:29.526257   71330 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 11 23:09 /usr/share/ca-certificates/76342.pem
	I0811 23:24:29.526292   71330 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 11 23:09 /usr/share/ca-certificates/76342.pem
	I0811 23:24:29.526343   71330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/76342.pem
	I0811 23:24:29.534593   71330 command_runner.go:130] > 3ec20f2e
	I0811 23:24:29.535004   71330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/76342.pem /etc/ssl/certs/3ec20f2e.0"
	I0811 23:24:29.546572   71330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0811 23:24:29.557909   71330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0811 23:24:29.562551   71330 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 11 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I0811 23:24:29.562580   71330 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 11 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I0811 23:24:29.562628   71330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0811 23:24:29.570652   71330 command_runner.go:130] > b5213941
	I0811 23:24:29.571008   71330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
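The three test/hash/ln sequences above implement OpenSSL's subject-hash lookup scheme: TLS clients find a CA in /etc/ssl/certs via a symlink named <subject-hash>.0. One iteration, reproduced as a standalone sketch (the certificate path is the one from this log):

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints b5213941 above
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"  # .0 = first cert with this hash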
	I0811 23:24:29.582386   71330 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0811 23:24:29.586643   71330 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0811 23:24:29.586674   71330 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0811 23:24:29.586741   71330 kubeadm.go:404] StartCluster: {Name:multinode-891155 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-891155 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0811 23:24:29.586838   71330 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0811 23:24:29.586894   71330 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0811 23:24:29.626528   71330 cri.go:89] found id: ""
	I0811 23:24:29.626596   71330 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0811 23:24:29.635978   71330 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0811 23:24:29.636001   71330 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0811 23:24:29.636010   71330 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0811 23:24:29.637316   71330 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0811 23:24:29.647879   71330 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0811 23:24:29.647979   71330 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0811 23:24:29.658139   71330 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0811 23:24:29.658166   71330 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0811 23:24:29.658177   71330 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0811 23:24:29.658211   71330 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0811 23:24:29.658245   71330 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0811 23:24:29.658294   71330 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
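The kubeadm init invocation above disables the preflight checks that predictably fail inside a Docker container (ports, swap, CPU/memory, pre-existing manifest directories, SystemVerification). To inspect what those checks would report without suppressing them, the preflight phase can be run on its own; a sketch against the same generated config:

	sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml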
	I0811 23:24:29.710854   71330 kubeadm.go:322] [init] Using Kubernetes version: v1.27.4
	I0811 23:24:29.710882   71330 command_runner.go:130] > [init] Using Kubernetes version: v1.27.4
	I0811 23:24:29.711318   71330 kubeadm.go:322] [preflight] Running pre-flight checks
	I0811 23:24:29.711336   71330 command_runner.go:130] > [preflight] Running pre-flight checks
	I0811 23:24:29.757346   71330 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0811 23:24:29.757406   71330 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0811 23:24:29.757535   71330 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1040-aws
	I0811 23:24:29.757558   71330 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1040-aws
	I0811 23:24:29.757620   71330 command_runner.go:130] > OS: Linux
	I0811 23:24:29.757646   71330 kubeadm.go:322] OS: Linux
	I0811 23:24:29.757743   71330 kubeadm.go:322] CGROUPS_CPU: enabled
	I0811 23:24:29.757765   71330 command_runner.go:130] > CGROUPS_CPU: enabled
	I0811 23:24:29.757846   71330 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0811 23:24:29.757873   71330 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0811 23:24:29.757969   71330 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0811 23:24:29.757990   71330 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0811 23:24:29.758073   71330 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0811 23:24:29.758103   71330 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0811 23:24:29.758200   71330 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0811 23:24:29.758222   71330 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0811 23:24:29.758310   71330 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0811 23:24:29.758335   71330 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0811 23:24:29.758436   71330 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0811 23:24:29.758459   71330 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0811 23:24:29.758537   71330 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0811 23:24:29.758561   71330 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0811 23:24:29.758655   71330 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0811 23:24:29.758676   71330 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0811 23:24:29.844708   71330 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0811 23:24:29.844769   71330 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0811 23:24:29.844865   71330 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0811 23:24:29.844875   71330 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0811 23:24:29.844961   71330 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0811 23:24:29.844969   71330 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0811 23:24:30.105928   71330 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0811 23:24:30.110662   71330 out.go:204]   - Generating certificates and keys ...
	I0811 23:24:30.106020   71330 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0811 23:24:30.110829   71330 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0811 23:24:30.110845   71330 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0811 23:24:30.110903   71330 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0811 23:24:30.110911   71330 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0811 23:24:30.614950   71330 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0811 23:24:30.614973   71330 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0811 23:24:31.277264   71330 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0811 23:24:31.277293   71330 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0811 23:24:31.605757   71330 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0811 23:24:31.605786   71330 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0811 23:24:32.262140   71330 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0811 23:24:32.262170   71330 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0811 23:24:32.635346   71330 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0811 23:24:32.635369   71330 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0811 23:24:32.635826   71330 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-891155] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0811 23:24:32.635846   71330 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-891155] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0811 23:24:33.631191   71330 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0811 23:24:33.631214   71330 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0811 23:24:33.631674   71330 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-891155] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0811 23:24:33.631693   71330 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-891155] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0811 23:24:34.226248   71330 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0811 23:24:34.226272   71330 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0811 23:24:34.515147   71330 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0811 23:24:34.515169   71330 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0811 23:24:34.864517   71330 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0811 23:24:34.864548   71330 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0811 23:24:34.864896   71330 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0811 23:24:34.864908   71330 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0811 23:24:36.295026   71330 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0811 23:24:36.295049   71330 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0811 23:24:37.208049   71330 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0811 23:24:37.208071   71330 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0811 23:24:37.483298   71330 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0811 23:24:37.483327   71330 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0811 23:24:38.088925   71330 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0811 23:24:38.088948   71330 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0811 23:24:38.099950   71330 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0811 23:24:38.099976   71330 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0811 23:24:38.100861   71330 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0811 23:24:38.100876   71330 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0811 23:24:38.101166   71330 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0811 23:24:38.101184   71330 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0811 23:24:38.201465   71330 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0811 23:24:38.204277   71330 out.go:204]   - Booting up control plane ...
	I0811 23:24:38.201567   71330 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0811 23:24:38.204392   71330 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0811 23:24:38.204418   71330 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0811 23:24:38.204547   71330 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0811 23:24:38.204557   71330 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0811 23:24:38.204627   71330 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0811 23:24:38.204637   71330 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0811 23:24:38.205347   71330 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0811 23:24:38.205364   71330 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0811 23:24:38.208225   71330 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0811 23:24:38.208247   71330 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0811 23:24:44.711427   71330 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.503054 seconds
	I0811 23:24:44.711434   71330 command_runner.go:130] > [apiclient] All control plane components are healthy after 6.503054 seconds
	I0811 23:24:44.711568   71330 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0811 23:24:44.711577   71330 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0811 23:24:44.730602   71330 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0811 23:24:44.730631   71330 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0811 23:24:45.258444   71330 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0811 23:24:45.258473   71330 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0811 23:24:45.258645   71330 kubeadm.go:322] [mark-control-plane] Marking the node multinode-891155 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0811 23:24:45.258656   71330 command_runner.go:130] > [mark-control-plane] Marking the node multinode-891155 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0811 23:24:45.770071   71330 kubeadm.go:322] [bootstrap-token] Using token: r5tloj.fal31miq2gi2vw5s
	I0811 23:24:45.770090   71330 command_runner.go:130] > [bootstrap-token] Using token: r5tloj.fal31miq2gi2vw5s
	I0811 23:24:45.772065   71330 out.go:204]   - Configuring RBAC rules ...
	I0811 23:24:45.772190   71330 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0811 23:24:45.772201   71330 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0811 23:24:45.777486   71330 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0811 23:24:45.777511   71330 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0811 23:24:45.788645   71330 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0811 23:24:45.788654   71330 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0811 23:24:45.792563   71330 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0811 23:24:45.792591   71330 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0811 23:24:45.796758   71330 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0811 23:24:45.796780   71330 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0811 23:24:45.802489   71330 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0811 23:24:45.802516   71330 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0811 23:24:45.816577   71330 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0811 23:24:45.816597   71330 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0811 23:24:46.044450   71330 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0811 23:24:46.044483   71330 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0811 23:24:46.183086   71330 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0811 23:24:46.183114   71330 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0811 23:24:46.196541   71330 kubeadm.go:322] 
	I0811 23:24:46.196618   71330 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0811 23:24:46.196633   71330 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0811 23:24:46.196639   71330 kubeadm.go:322] 
	I0811 23:24:46.196711   71330 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0811 23:24:46.196720   71330 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0811 23:24:46.196724   71330 kubeadm.go:322] 
	I0811 23:24:46.196748   71330 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0811 23:24:46.196757   71330 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0811 23:24:46.196812   71330 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0811 23:24:46.196819   71330 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0811 23:24:46.196866   71330 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0811 23:24:46.196874   71330 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0811 23:24:46.196879   71330 kubeadm.go:322] 
	I0811 23:24:46.196930   71330 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0811 23:24:46.196938   71330 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0811 23:24:46.196942   71330 kubeadm.go:322] 
	I0811 23:24:46.196987   71330 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0811 23:24:46.196995   71330 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0811 23:24:46.196999   71330 kubeadm.go:322] 
	I0811 23:24:46.197048   71330 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0811 23:24:46.197056   71330 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0811 23:24:46.197149   71330 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0811 23:24:46.197158   71330 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0811 23:24:46.197221   71330 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0811 23:24:46.197232   71330 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0811 23:24:46.197237   71330 kubeadm.go:322] 
	I0811 23:24:46.197316   71330 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0811 23:24:46.197324   71330 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0811 23:24:46.197395   71330 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0811 23:24:46.197404   71330 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0811 23:24:46.197408   71330 kubeadm.go:322] 
	I0811 23:24:46.197487   71330 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token r5tloj.fal31miq2gi2vw5s \
	I0811 23:24:46.197516   71330 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token r5tloj.fal31miq2gi2vw5s \
	I0811 23:24:46.197618   71330 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8884e7cec26767ea186e311f265f5a190c626a6e55b00221424eafcad2c1cce3 \
	I0811 23:24:46.197626   71330 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:8884e7cec26767ea186e311f265f5a190c626a6e55b00221424eafcad2c1cce3 \
	I0811 23:24:46.197645   71330 kubeadm.go:322] 	--control-plane 
	I0811 23:24:46.197654   71330 command_runner.go:130] > 	--control-plane 
	I0811 23:24:46.197658   71330 kubeadm.go:322] 
	I0811 23:24:46.197737   71330 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0811 23:24:46.197746   71330 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0811 23:24:46.197751   71330 kubeadm.go:322] 
	I0811 23:24:46.197827   71330 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token r5tloj.fal31miq2gi2vw5s \
	I0811 23:24:46.197838   71330 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token r5tloj.fal31miq2gi2vw5s \
	I0811 23:24:46.197932   71330 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8884e7cec26767ea186e311f265f5a190c626a6e55b00221424eafcad2c1cce3 
	I0811 23:24:46.197941   71330 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:8884e7cec26767ea186e311f265f5a190c626a6e55b00221424eafcad2c1cce3 
	I0811 23:24:46.202213   71330 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1040-aws\n", err: exit status 1
	I0811 23:24:46.202242   71330 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1040-aws\n", err: exit status 1
	I0811 23:24:46.202343   71330 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0811 23:24:46.202353   71330 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
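kubeadm prints the same bootstrap token and CA-certificate hash twice: once with --control-plane for joining further control-plane nodes, and once without it for workers. If the hash is ever lost, it can be recomputed from the cluster CA using the standard recipe from the kubeadm documentation, here pointed at the certificate directory this cluster was initialized with:

	openssl x509 -pubkey -noout -in /var/lib/minikube/certs/ca.crt \
	  | openssl pkey -pubin -outform der \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'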
	I0811 23:24:46.202367   71330 cni.go:84] Creating CNI manager for ""
	I0811 23:24:46.202387   71330 cni.go:136] 1 nodes found, recommending kindnet
	I0811 23:24:46.205461   71330 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0811 23:24:46.207490   71330 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0811 23:24:46.221747   71330 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0811 23:24:46.221773   71330 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I0811 23:24:46.221781   71330 command_runner.go:130] > Device: 36h/54d	Inode: 1306623     Links: 1
	I0811 23:24:46.221788   71330 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0811 23:24:46.221795   71330 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I0811 23:24:46.221801   71330 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I0811 23:24:46.221811   71330 command_runner.go:130] > Change: 2023-08-11 23:01:54.534845020 +0000
	I0811 23:24:46.221817   71330 command_runner.go:130] >  Birth: 2023-08-11 23:01:54.490844603 +0000
	I0811 23:24:46.223661   71330 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.27.4/kubectl ...
	I0811 23:24:46.223682   71330 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0811 23:24:46.252964   71330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0811 23:24:47.151932   71330 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0811 23:24:47.158757   71330 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0811 23:24:47.170112   71330 command_runner.go:130] > serviceaccount/kindnet created
	I0811 23:24:47.182463   71330 command_runner.go:130] > daemonset.apps/kindnet created
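The four "created" lines confirm that the CNI manifest installs kindnet as a DaemonSet plus its RBAC objects. A quick follow-up check, not part of this run (the app=kindnet label is assumed from minikube's bundled kindnet manifest):

	kubectl -n kube-system rollout status daemonset/kindnet
	kubectl -n kube-system get pods -l app=kindnet -o wide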
	I0811 23:24:47.189609   71330 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0811 23:24:47.189743   71330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:24:47.189826   71330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl label nodes minikube.k8s.io/version=v1.31.1 minikube.k8s.io/commit=0bff008270ec17d4e0c2c90a14e18ac31a0e01f5 minikube.k8s.io/name=multinode-891155 minikube.k8s.io/updated_at=2023_08_11T23_24_47_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:24:47.339127   71330 command_runner.go:130] > node/multinode-891155 labeled
	I0811 23:24:47.343415   71330 command_runner.go:130] > -16
	I0811 23:24:47.352496   71330 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0811 23:24:47.356843   71330 ops.go:34] apiserver oom_adj: -16
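Here minikube reads /proc/<pid>/oom_adj for the kube-apiserver and records -16. On the legacy oom_adj scale, values run from -17 (never kill) to +15 (kill first), so -16 leaves the API server as one of the last candidates for the kernel's OOM killer. The check is the one-liner already shown above:

	cat /proc/$(pgrep kube-apiserver)/oom_adj   # -16 on this node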
	I0811 23:24:47.356926   71330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:24:47.472698   71330 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 23:24:47.472782   71330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:24:47.567665   71330 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 23:24:48.068423   71330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:24:48.169486   71330 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 23:24:48.567871   71330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:24:48.658430   71330 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 23:24:49.067949   71330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:24:49.162273   71330 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 23:24:49.567853   71330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:24:49.660553   71330 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 23:24:50.068124   71330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:24:50.161277   71330 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 23:24:50.568692   71330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:24:50.653231   71330 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 23:24:51.068118   71330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:24:51.161046   71330 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 23:24:51.568499   71330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:24:51.664541   71330 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 23:24:52.067903   71330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:24:52.160963   71330 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 23:24:52.567899   71330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:24:52.665489   71330 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 23:24:53.067998   71330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:24:53.154334   71330 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 23:24:53.567864   71330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:24:53.658810   71330 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 23:24:54.068396   71330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:24:54.161231   71330 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 23:24:54.568819   71330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:24:54.666174   71330 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 23:24:55.067840   71330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:24:55.192247   71330 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 23:24:55.568232   71330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:24:55.675387   71330 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 23:24:56.067900   71330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:24:56.161121   71330 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 23:24:56.567819   71330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:24:56.673443   71330 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 23:24:57.067903   71330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:24:57.167395   71330 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 23:24:57.568008   71330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:24:57.666303   71330 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 23:24:58.068369   71330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:24:58.161607   71330 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 23:24:58.568133   71330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:24:58.699709   71330 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0811 23:24:59.067884   71330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0811 23:24:59.165377   71330 command_runner.go:130] > NAME      SECRETS   AGE
	I0811 23:24:59.165518   71330 command_runner.go:130] > default   0         0s
	I0811 23:24:59.170030   71330 kubeadm.go:1081] duration metric: took 11.98034143s to wait for elevateKubeSystemPrivileges.
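The alternating "get sa default" / NotFound lines above are a poll: kubeadm creates the "default" ServiceAccount asynchronously, so minikube retries roughly every half second until it exists (just under 12s here). The same wait as a minimal shell sketch, assuming kubectl and a reachable kubeconfig (the loop and interval are illustrative, not minikube's exact code):

	until kubectl -n default get serviceaccount default >/dev/null 2>&1; do
	  sleep 0.5
	done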
	I0811 23:24:59.170053   71330 kubeadm.go:406] StartCluster complete in 29.583315778s
	I0811 23:24:59.170069   71330 settings.go:142] acquiring lock: {Name:mkcdb2c6d2ae1cdcfca5cf5a992c9589250c7de5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:24:59.170130   71330 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17044-2333/kubeconfig
	I0811 23:24:59.170748   71330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17044-2333/kubeconfig: {Name:mk6629381ac7815dbe689239b7a7612d237ee7bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:24:59.171249   71330 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/17044-2333/kubeconfig
	I0811 23:24:59.171520   71330 kapi.go:59] client config for multinode-891155: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17044-2333/.minikube/profiles/multinode-891155/client.crt", KeyFile:"/home/jenkins/minikube-integration/17044-2333/.minikube/profiles/multinode-891155/client.key", CAFile:"/home/jenkins/minikube-integration/17044-2333/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16eb290), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0811 23:24:59.172658   71330 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0811 23:24:59.172668   71330 round_trippers.go:469] Request Headers:
	I0811 23:24:59.172678   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:59.172685   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:24:59.172897   71330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0811 23:24:59.173283   71330 config.go:182] Loaded profile config "multinode-891155": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0811 23:24:59.173341   71330 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0811 23:24:59.173415   71330 addons.go:69] Setting storage-provisioner=true in profile "multinode-891155"
	I0811 23:24:59.173440   71330 addons.go:231] Setting addon storage-provisioner=true in "multinode-891155"
	I0811 23:24:59.173473   71330 cert_rotation.go:137] Starting client certificate rotation controller
	I0811 23:24:59.173506   71330 host.go:66] Checking if "multinode-891155" exists ...
	I0811 23:24:59.173513   71330 addons.go:69] Setting default-storageclass=true in profile "multinode-891155"
	I0811 23:24:59.173608   71330 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-891155"
	I0811 23:24:59.173986   71330 cli_runner.go:164] Run: docker container inspect multinode-891155 --format={{.State.Status}}
	I0811 23:24:59.174474   71330 cli_runner.go:164] Run: docker container inspect multinode-891155 --format={{.State.Status}}
	I0811 23:24:59.201332   71330 round_trippers.go:574] Response Status: 200 OK in 28 milliseconds
	I0811 23:24:59.201353   71330 round_trippers.go:577] Response Headers:
	I0811 23:24:59.201362   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:59.201368   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:59.201375   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:24:59.201381   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:24:59.201388   71330 round_trippers.go:580]     Content-Length: 291
	I0811 23:24:59.201395   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:59 GMT
	I0811 23:24:59.201401   71330 round_trippers.go:580]     Audit-Id: 0c7f6813-8664-4a44-8772-ec3ff50f8fbb
	I0811 23:24:59.201428   71330 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"4304ada0-0fa2-48c9-be07-67a3612f0ddd","resourceVersion":"259","creationTimestamp":"2023-08-11T23:24:46Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0811 23:24:59.202017   71330 request.go:1188] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"4304ada0-0fa2-48c9-be07-67a3612f0ddd","resourceVersion":"259","creationTimestamp":"2023-08-11T23:24:46Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0811 23:24:59.202103   71330 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0811 23:24:59.202113   71330 round_trippers.go:469] Request Headers:
	I0811 23:24:59.202144   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:59.202155   71330 round_trippers.go:473]     Content-Type: application/json
	I0811 23:24:59.202162   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:24:59.218573   71330 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0811 23:24:59.218602   71330 round_trippers.go:577] Response Headers:
	I0811 23:24:59.218612   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:59.218619   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:24:59.218627   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:24:59.218645   71330 round_trippers.go:580]     Content-Length: 291
	I0811 23:24:59.218672   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:59 GMT
	I0811 23:24:59.218679   71330 round_trippers.go:580]     Audit-Id: b31c6d1a-0bfd-4faf-b83b-e73e5febc773
	I0811 23:24:59.218690   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:59.218713   71330 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"4304ada0-0fa2-48c9-be07-67a3612f0ddd","resourceVersion":"335","creationTimestamp":"2023-08-11T23:24:46Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0811 23:24:59.218851   71330 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0811 23:24:59.218860   71330 round_trippers.go:469] Request Headers:
	I0811 23:24:59.218867   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:59.218874   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:24:59.220669   71330 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/17044-2333/kubeconfig
	I0811 23:24:59.220944   71330 kapi.go:59] client config for multinode-891155: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17044-2333/.minikube/profiles/multinode-891155/client.crt", KeyFile:"/home/jenkins/minikube-integration/17044-2333/.minikube/profiles/multinode-891155/client.key", CAFile:"/home/jenkins/minikube-integration/17044-2333/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16eb290), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0811 23:24:59.221370   71330 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0811 23:24:59.221381   71330 round_trippers.go:469] Request Headers:
	I0811 23:24:59.221391   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:59.221398   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:24:59.233589   71330 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0811 23:24:59.232267   71330 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0811 23:24:59.232655   71330 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0811 23:24:59.235211   71330 round_trippers.go:577] Response Headers:
	I0811 23:24:59.235221   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:59.235228   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:59.235240   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:24:59.235247   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:24:59.235254   71330 round_trippers.go:580]     Content-Length: 291
	I0811 23:24:59.235261   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:59 GMT
	I0811 23:24:59.235267   71330 round_trippers.go:580]     Audit-Id: 3f1934a7-b8e6-4207-82eb-681f0ffafb4d
	I0811 23:24:59.235291   71330 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"4304ada0-0fa2-48c9-be07-67a3612f0ddd","resourceVersion":"335","creationTimestamp":"2023-08-11T23:24:46Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0811 23:24:59.235376   71330 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-891155" context rescaled to 1 replicas
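The GET/PUT pair above edits the Scale subresource of the coredns Deployment directly, dropping spec.replicas from 2 to 1, since one replica suffices while the cluster has a single node. The imperative equivalent is a sketch only (minikube talks to the subresource itself, as logged):

	kubectl -n kube-system scale deployment coredns --replicas=1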
	I0811 23:24:59.235400   71330 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0811 23:24:59.237192   71330 out.go:177] * Verifying Kubernetes components...
	I0811 23:24:59.235604   71330 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0811 23:24:59.235613   71330 round_trippers.go:577] Response Headers:
	I0811 23:24:59.238710   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:24:59.238719   71330 round_trippers.go:580]     Content-Length: 109
	I0811 23:24:59.238726   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:59 GMT
	I0811 23:24:59.238733   71330 round_trippers.go:580]     Audit-Id: 52f50f84-aeb0-4f2b-9375-5f36862d261b
	I0811 23:24:59.238740   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:59.238747   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:59.238754   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:24:59.238774   71330 request.go:1188] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"335"},"items":[]}
	I0811 23:24:59.239019   71330 addons.go:231] Setting addon default-storageclass=true in "multinode-891155"
	I0811 23:24:59.239046   71330 host.go:66] Checking if "multinode-891155" exists ...
	I0811 23:24:59.239492   71330 cli_runner.go:164] Run: docker container inspect multinode-891155 --format={{.State.Status}}
	I0811 23:24:59.239690   71330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0811 23:24:59.239884   71330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0811 23:24:59.239947   71330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-891155
	I0811 23:24:59.303922   71330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/multinode-891155/id_rsa Username:docker}
	I0811 23:24:59.306364   71330 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0811 23:24:59.306385   71330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0811 23:24:59.306448   71330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-891155
	I0811 23:24:59.330492   71330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/multinode-891155/id_rsa Username:docker}
	I0811 23:24:59.400382   71330 command_runner.go:130] > apiVersion: v1
	I0811 23:24:59.400397   71330 command_runner.go:130] > data:
	I0811 23:24:59.400403   71330 command_runner.go:130] >   Corefile: |
	I0811 23:24:59.400408   71330 command_runner.go:130] >     .:53 {
	I0811 23:24:59.400413   71330 command_runner.go:130] >         errors
	I0811 23:24:59.400418   71330 command_runner.go:130] >         health {
	I0811 23:24:59.400423   71330 command_runner.go:130] >            lameduck 5s
	I0811 23:24:59.400428   71330 command_runner.go:130] >         }
	I0811 23:24:59.400432   71330 command_runner.go:130] >         ready
	I0811 23:24:59.400440   71330 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0811 23:24:59.400445   71330 command_runner.go:130] >            pods insecure
	I0811 23:24:59.400451   71330 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0811 23:24:59.400459   71330 command_runner.go:130] >            ttl 30
	I0811 23:24:59.400464   71330 command_runner.go:130] >         }
	I0811 23:24:59.400469   71330 command_runner.go:130] >         prometheus :9153
	I0811 23:24:59.400474   71330 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0811 23:24:59.400480   71330 command_runner.go:130] >            max_concurrent 1000
	I0811 23:24:59.400485   71330 command_runner.go:130] >         }
	I0811 23:24:59.400490   71330 command_runner.go:130] >         cache 30
	I0811 23:24:59.400495   71330 command_runner.go:130] >         loop
	I0811 23:24:59.400499   71330 command_runner.go:130] >         reload
	I0811 23:24:59.400504   71330 command_runner.go:130] >         loadbalance
	I0811 23:24:59.400508   71330 command_runner.go:130] >     }
	I0811 23:24:59.400513   71330 command_runner.go:130] > kind: ConfigMap
	I0811 23:24:59.400517   71330 command_runner.go:130] > metadata:
	I0811 23:24:59.400525   71330 command_runner.go:130] >   creationTimestamp: "2023-08-11T23:24:45Z"
	I0811 23:24:59.400530   71330 command_runner.go:130] >   name: coredns
	I0811 23:24:59.400535   71330 command_runner.go:130] >   namespace: kube-system
	I0811 23:24:59.400540   71330 command_runner.go:130] >   resourceVersion: "255"
	I0811 23:24:59.400546   71330 command_runner.go:130] >   uid: cdc29067-5eb8-450e-b511-379ffffb2098
	I0811 23:24:59.406697   71330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
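The sed pipeline above rewrites the Corefile dumped just before it: it inserts a log directive ahead of errors and splices a hosts plugin in front of the forward block, so pods can resolve host.minikube.internal to the host gateway (192.168.58.1 on this network). The injected hosts stanza, reconstructed from the sed expression, is:

	hosts {
	   192.168.58.1 host.minikube.internal
	   fallthrough
	}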
	I0811 23:24:59.406977   71330 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/17044-2333/kubeconfig
	I0811 23:24:59.407260   71330 kapi.go:59] client config for multinode-891155: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17044-2333/.minikube/profiles/multinode-891155/client.crt", KeyFile:"/home/jenkins/minikube-integration/17044-2333/.minikube/profiles/multinode-891155/client.key", CAFile:"/home/jenkins/minikube-integration/17044-2333/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16eb290), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0811 23:24:59.408512   71330 node_ready.go:35] waiting up to 6m0s for node "multinode-891155" to be "Ready" ...
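The readiness wait that follows is a polling loop: one GET against the node object roughly every 500ms until its Ready condition flips to True. A hypothetical one-shot equivalent from a shell, assuming the profile's kubeconfig context is named multinode-891155, would be:

        kubectl --context multinode-891155 wait --for=condition=Ready node/multinode-891155 --timeout=6m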
	I0811 23:24:59.408645   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:24:59.408678   71330 round_trippers.go:469] Request Headers:
	I0811 23:24:59.408702   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:59.408724   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:24:59.413955   71330 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0811 23:24:59.413981   71330 round_trippers.go:577] Response Headers:
	I0811 23:24:59.413991   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:24:59.414013   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:24:59.414043   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:59 GMT
	I0811 23:24:59.414056   71330 round_trippers.go:580]     Audit-Id: 43436da1-d648-45db-a079-9b9c000c722d
	I0811 23:24:59.414063   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:59.414070   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:59.414921   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"329","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations [truncated 6119 chars]
	I0811 23:24:59.415693   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:24:59.415736   71330 round_trippers.go:469] Request Headers:
	I0811 23:24:59.415760   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:59.415789   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:24:59.433171   71330 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0811 23:24:59.433204   71330 round_trippers.go:577] Response Headers:
	I0811 23:24:59.433230   71330 round_trippers.go:580]     Audit-Id: a2383379-9f65-4097-b96a-96ad10a71cfc
	I0811 23:24:59.433245   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:59.433252   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:59.433259   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:24:59.433283   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:24:59.433299   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:59 GMT
	I0811 23:24:59.433758   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"329","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations [truncated 6119 chars]
	I0811 23:24:59.508139   71330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0811 23:24:59.521258   71330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0811 23:24:59.934828   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:24:59.934846   71330 round_trippers.go:469] Request Headers:
	I0811 23:24:59.934856   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:24:59.934863   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:24:59.957641   71330 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0811 23:24:59.957664   71330 round_trippers.go:577] Response Headers:
	I0811 23:24:59.957697   71330 round_trippers.go:580]     Audit-Id: c5c7a6a6-49fd-4d66-a391-df94ced676b0
	I0811 23:24:59.957711   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:24:59.957719   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:24:59.957726   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:24:59.957742   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:24:59.957769   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:24:59 GMT
	I0811 23:24:59.966793   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:00.146315   71330 command_runner.go:130] > configmap/coredns replaced
	I0811 23:25:00.152560   71330 start.go:901] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
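A quick way to confirm the injected host record actually resolves in-cluster (a hypothetical spot check, not part of this run) is to nslookup it from a throwaway pod:

        kubectl --context multinode-891155 run dnscheck --rm -it --restart=Never --image=busybox:1.28 -- nslookup host.minikube.internal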
	I0811 23:25:00.259226   71330 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0811 23:25:00.434600   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:00.434657   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:00.434684   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:00.434707   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:00.444576   71330 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0811 23:25:00.444641   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:00.444665   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:00.444688   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:00 GMT
	I0811 23:25:00.444712   71330 round_trippers.go:580]     Audit-Id: 4f77a89f-f6c1-4434-bb1c-342cb9bcf3e3
	I0811 23:25:00.444734   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:00.444767   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:00.444793   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:00.445752   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:00.476226   71330 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0811 23:25:00.476289   71330 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0811 23:25:00.476318   71330 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0811 23:25:00.476341   71330 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0811 23:25:00.476362   71330 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0811 23:25:00.476393   71330 command_runner.go:130] > pod/storage-provisioner created
	I0811 23:25:00.478697   71330 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0811 23:25:00.480852   71330 addons.go:502] enable addons completed in 1.307488705s: enabled=[default-storageclass storage-provisioner]
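default-storageclass and storage-provisioner are the stock minikube addons; they can be listed or toggled per profile with the addons subcommand, for example:

        minikube -p multinode-891155 addons list
        minikube -p multinode-891155 addons enable storage-provisioner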
	I0811 23:25:00.935360   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:00.935416   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:00.935450   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:00.935470   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:00.938517   71330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:25:00.938584   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:00.938599   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:00.938607   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:00 GMT
	I0811 23:25:00.938615   71330 round_trippers.go:580]     Audit-Id: d9a736e0-2638-4d9e-8e53-ea88ea30156b
	I0811 23:25:00.938621   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:00.938628   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:00.938639   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:00.939013   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:01.434710   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:01.434739   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:01.434750   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:01.434758   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:01.437471   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:01.437545   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:01.437561   71330 round_trippers.go:580]     Audit-Id: 01f6a50c-02f0-4307-a576-9ae2784546f0
	I0811 23:25:01.437572   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:01.437579   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:01.437586   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:01.437593   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:01.437616   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:01 GMT
	I0811 23:25:01.437761   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:01.438173   71330 node_ready.go:58] node "multinode-891155" has status "Ready":"False"
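The "Ready":"False" above is read out of the node's status conditions; the same field can be pulled directly with a jsonpath query (again hypothetical, not part of the test run):

        kubectl --context multinode-891155 get node multinode-891155 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'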
	I0811 23:25:01.935287   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:01.935332   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:01.935343   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:01.935350   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:01.938159   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:01.938195   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:01.938210   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:01 GMT
	I0811 23:25:01.938217   71330 round_trippers.go:580]     Audit-Id: 635b30b5-5a8a-42e5-a7ec-69d31e27e504
	I0811 23:25:01.938224   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:01.938235   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:01.938246   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:01.938253   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:01.938393   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:02.434485   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:02.434519   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:02.434529   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:02.434536   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:02.438120   71330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:25:02.438147   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:02.438157   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:02.438168   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:02.438176   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:02 GMT
	I0811 23:25:02.438183   71330 round_trippers.go:580]     Audit-Id: aa0e374a-78f2-47dd-a584-2553b40b1c3b
	I0811 23:25:02.438194   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:02.438201   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:02.438328   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:02.934919   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:02.934950   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:02.934960   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:02.934968   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:02.937506   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:02.937617   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:02.937651   71330 round_trippers.go:580]     Audit-Id: e3df125d-34da-45b7-9961-8bff8590718b
	I0811 23:25:02.937660   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:02.937678   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:02.937693   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:02.937700   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:02.937708   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:02 GMT
	I0811 23:25:02.937823   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:03.435017   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:03.435057   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:03.435068   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:03.435104   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:03.437651   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:03.437717   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:03.437738   71330 round_trippers.go:580]     Audit-Id: f86a0fa7-6979-4b89-a942-246aab165536
	I0811 23:25:03.437760   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:03.437845   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:03.437864   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:03.437872   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:03.437879   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:03 GMT
	I0811 23:25:03.438067   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:03.438470   71330 node_ready.go:58] node "multinode-891155" has status "Ready":"False"
	I0811 23:25:03.934497   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:03.934522   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:03.934533   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:03.934540   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:03.937194   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:03.937216   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:03.937225   71330 round_trippers.go:580]     Audit-Id: ce7b1116-fdab-403e-9828-73a3f1515e88
	I0811 23:25:03.937232   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:03.937240   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:03.937247   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:03.937253   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:03.937263   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:03 GMT
	I0811 23:25:03.937867   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:04.434950   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:04.435040   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:04.435055   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:04.435064   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:04.437671   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:04.437692   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:04.437701   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:04.437709   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:04 GMT
	I0811 23:25:04.437716   71330 round_trippers.go:580]     Audit-Id: 3a043346-eab8-4db5-b0f3-92459318305c
	I0811 23:25:04.437723   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:04.437729   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:04.437735   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:04.437860   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:04.935117   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:04.935146   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:04.935156   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:04.935164   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:04.937644   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:04.937665   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:04.937673   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:04.937680   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:04 GMT
	I0811 23:25:04.937687   71330 round_trippers.go:580]     Audit-Id: f3d82e2f-d5f7-4f6e-8552-bef60886ea5f
	I0811 23:25:04.937694   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:04.937700   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:04.937707   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:04.937824   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:05.434720   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:05.434745   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:05.434755   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:05.434763   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:05.437704   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:05.437726   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:05.437734   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:05.437741   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:05.437748   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:05 GMT
	I0811 23:25:05.437755   71330 round_trippers.go:580]     Audit-Id: fdb2e905-b602-4aab-b0c8-17e0c4630ba3
	I0811 23:25:05.437763   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:05.437769   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:05.437897   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:05.935020   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:05.935042   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:05.935057   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:05.935066   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:05.937799   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:05.937825   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:05.937833   71330 round_trippers.go:580]     Audit-Id: f90ea64e-9987-498b-ad3d-a4468b15134b
	I0811 23:25:05.937840   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:05.937846   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:05.937853   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:05.937861   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:05.937867   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:05 GMT
	I0811 23:25:05.937972   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:05.938359   71330 node_ready.go:58] node "multinode-891155" has status "Ready":"False"
	I0811 23:25:06.435233   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:06.435258   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:06.435268   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:06.435276   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:06.437946   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:06.437980   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:06.437989   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:06.437996   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:06 GMT
	I0811 23:25:06.438003   71330 round_trippers.go:580]     Audit-Id: 31cd0dc4-9104-4cf1-bd81-0541a347cf96
	I0811 23:25:06.438010   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:06.438016   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:06.438023   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:06.438149   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:06.935334   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:06.935358   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:06.935368   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:06.935375   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:06.938115   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:06.938143   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:06.938153   71330 round_trippers.go:580]     Audit-Id: 6a532e1e-7c84-4a8f-87e1-1157f542c895
	I0811 23:25:06.938160   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:06.938167   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:06.938174   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:06.938182   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:06.938191   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:06 GMT
	I0811 23:25:06.938306   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:07.434996   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:07.435020   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:07.435037   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:07.435045   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:07.437864   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:07.437934   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:07.437956   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:07.437975   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:07.438010   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:07.438028   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:07.438036   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:07 GMT
	I0811 23:25:07.438042   71330 round_trippers.go:580]     Audit-Id: 813d6530-aaea-4bf7-801f-df85cb48bf7b
	I0811 23:25:07.438201   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:07.934386   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:07.934412   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:07.934422   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:07.934430   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:07.936962   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:07.936985   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:07.936993   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:07.937000   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:07.937007   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:07.937015   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:07 GMT
	I0811 23:25:07.937027   71330 round_trippers.go:580]     Audit-Id: fac40ce9-950d-494f-be7c-c87f1b0a90d5
	I0811 23:25:07.937035   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:07.937144   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:08.434615   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:08.434639   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:08.434649   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:08.434657   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:08.437601   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:08.437637   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:08.437652   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:08.437659   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:08.437669   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:08.437678   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:08.437699   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:08 GMT
	I0811 23:25:08.437709   71330 round_trippers.go:580]     Audit-Id: b8dbbdee-7672-40d8-854f-a3ee26d59f89
	I0811 23:25:08.437991   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:08.438464   71330 node_ready.go:58] node "multinode-891155" has status "Ready":"False"
	I0811 23:25:08.935029   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:08.935051   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:08.935061   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:08.935069   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:08.937671   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:08.937696   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:08.937705   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:08.937712   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:08.937719   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:08.937725   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:08.937732   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:08 GMT
	I0811 23:25:08.937739   71330 round_trippers.go:580]     Audit-Id: 4f72eff6-3056-44c0-952c-a38021f5e67b
	I0811 23:25:08.937846   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:09.435071   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:09.435098   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:09.435109   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:09.435116   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:09.437971   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:09.437992   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:09.438001   71330 round_trippers.go:580]     Audit-Id: 442debe9-9a86-429d-ab6f-5b6249abce36
	I0811 23:25:09.438008   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:09.438014   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:09.438021   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:09.438029   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:09.438036   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:09 GMT
	I0811 23:25:09.438181   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:09.935337   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:09.935360   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:09.935370   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:09.935380   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:09.937963   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:09.937990   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:09.937999   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:09.938006   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:09.938013   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:09.938020   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:09 GMT
	I0811 23:25:09.938027   71330 round_trippers.go:580]     Audit-Id: 0b82b0f1-9a1c-4444-8661-842def312df8
	I0811 23:25:09.938036   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:09.938136   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:10.434569   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:10.434593   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:10.434603   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:10.434612   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:10.437042   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:10.437067   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:10.437075   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:10.437097   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:10.437106   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:10.437112   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:10.437120   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:10 GMT
	I0811 23:25:10.437132   71330 round_trippers.go:580]     Audit-Id: 5f7ad08a-fd1f-4c81-8be0-630c2ec91bc3
	I0811 23:25:10.437260   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:10.934345   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:10.934367   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:10.934377   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:10.934385   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:10.936916   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:10.936941   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:10.936950   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:10.936957   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:10.936964   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:10.936970   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:10 GMT
	I0811 23:25:10.936977   71330 round_trippers.go:580]     Audit-Id: 663b62b4-32ee-403c-b698-b0deb712c301
	I0811 23:25:10.936984   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:10.937070   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:10.937482   71330 node_ready.go:58] node "multinode-891155" has status "Ready":"False"
	I0811 23:25:11.435217   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:11.435240   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:11.435253   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:11.435261   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:11.437779   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:11.437799   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:11.437808   71330 round_trippers.go:580]     Audit-Id: 57a5fb0c-ffaf-4e13-aba2-c95e99bf0ad2
	I0811 23:25:11.437814   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:11.437823   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:11.437829   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:11.437836   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:11.437843   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:11 GMT
	I0811 23:25:11.437971   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:11.934290   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:11.934315   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:11.934325   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:11.934334   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:11.936994   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:11.937031   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:11.937041   71330 round_trippers.go:580]     Audit-Id: 83009d70-f888-4bdc-9125-6f7bcd1cf913
	I0811 23:25:11.937049   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:11.937056   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:11.937063   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:11.937073   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:11.937110   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:11 GMT
	I0811 23:25:11.937566   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:12.435242   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:12.435270   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:12.435280   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:12.435288   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:12.437941   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:12.437982   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:12.437991   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:12.437998   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:12 GMT
	I0811 23:25:12.438022   71330 round_trippers.go:580]     Audit-Id: 14f00f4e-eae3-41fe-b7f7-bdeee14ec726
	I0811 23:25:12.438033   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:12.438040   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:12.438052   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:12.438257   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:12.934311   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:12.934335   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:12.934346   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:12.934353   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:12.937035   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:12.937059   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:12.937069   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:12.937076   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:12.937099   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:12 GMT
	I0811 23:25:12.937111   71330 round_trippers.go:580]     Audit-Id: ac6e3b0e-dc82-43dd-9502-3939d349a282
	I0811 23:25:12.937117   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:12.937124   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:12.937589   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:12.937990   71330 node_ready.go:58] node "multinode-891155" has status "Ready":"False"
	I0811 23:25:13.434976   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:13.435000   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:13.435010   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:13.435018   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:13.437698   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:13.437722   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:13.437734   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:13.437741   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:13 GMT
	I0811 23:25:13.437748   71330 round_trippers.go:580]     Audit-Id: 48247600-f48a-4eb8-b682-3fbc588598b9
	I0811 23:25:13.437755   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:13.437761   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:13.437767   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:13.437933   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:13.934426   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:13.934450   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:13.934460   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:13.934468   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:13.936961   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:13.936980   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:13.936990   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:13 GMT
	I0811 23:25:13.936996   71330 round_trippers.go:580]     Audit-Id: 91d576bc-148c-4687-b1e5-784f0e735b7d
	I0811 23:25:13.937003   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:13.937010   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:13.937016   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:13.937023   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:13.937409   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:14.434688   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:14.434712   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:14.434723   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:14.434730   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:14.437233   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:14.437259   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:14.437267   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:14.437275   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:14.437281   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:14.437288   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:14 GMT
	I0811 23:25:14.437295   71330 round_trippers.go:580]     Audit-Id: 4a04aa43-526a-4510-b938-6d6cd2560a57
	I0811 23:25:14.437304   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:14.437489   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:14.934534   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:14.934553   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:14.934563   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:14.934570   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:14.937104   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:14.937127   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:14.937136   71330 round_trippers.go:580]     Audit-Id: 8dffc62e-7cce-4f2f-af43-5d163f0617e8
	I0811 23:25:14.937143   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:14.937150   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:14.937156   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:14.937163   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:14.937173   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:14 GMT
	I0811 23:25:14.937541   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:14.938341   71330 node_ready.go:58] node "multinode-891155" has status "Ready":"False"
	I0811 23:25:15.434600   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:15.434624   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:15.434691   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:15.434706   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:15.437243   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:15.437263   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:15.437272   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:15 GMT
	I0811 23:25:15.437279   71330 round_trippers.go:580]     Audit-Id: 4586c6c6-990b-4dcd-9da7-72c75b92a6dc
	I0811 23:25:15.437285   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:15.437292   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:15.437298   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:15.437306   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:15.437454   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:15.934715   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:15.934739   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:15.934749   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:15.934757   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:15.937731   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:15.937755   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:15.937763   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:15.937770   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:15 GMT
	I0811 23:25:15.937777   71330 round_trippers.go:580]     Audit-Id: a5a72472-635e-41d0-94a7-6db341a85c7f
	I0811 23:25:15.937784   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:15.937791   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:15.937797   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:15.937964   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:16.434533   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:16.434556   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:16.434567   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:16.434575   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:16.437060   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:16.437118   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:16.437127   71330 round_trippers.go:580]     Audit-Id: 1bfd9919-037d-4d49-95cb-066bb7e1f8eb
	I0811 23:25:16.437134   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:16.437141   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:16.437150   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:16.437159   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:16.437171   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:16 GMT
	I0811 23:25:16.437499   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:16.934550   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:16.934574   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:16.934584   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:16.934592   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:16.937014   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:16.937034   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:16.937043   71330 round_trippers.go:580]     Audit-Id: e974bc6d-67dc-4809-94ec-ca70146321d2
	I0811 23:25:16.937050   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:16.937056   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:16.937062   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:16.937069   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:16.937075   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:16 GMT
	I0811 23:25:16.937254   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:17.434814   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:17.434839   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:17.434848   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:17.434857   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:17.437465   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:17.437489   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:17.437498   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:17.437505   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:17.437512   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:17.437519   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:17 GMT
	I0811 23:25:17.437525   71330 round_trippers.go:580]     Audit-Id: a93a574f-a3a1-447f-851a-d39c0b51ba4a
	I0811 23:25:17.437532   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:17.437671   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:17.438058   71330 node_ready.go:58] node "multinode-891155" has status "Ready":"False"
	I0811 23:25:17.934386   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:17.934410   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:17.934421   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:17.934428   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:17.937182   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:17.937205   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:17.937213   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:17.937220   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:17.937227   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:17.937251   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:17.937264   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:17 GMT
	I0811 23:25:17.937271   71330 round_trippers.go:580]     Audit-Id: 95b02bc5-da80-4ec4-83c9-66a4c81a7e47
	I0811 23:25:17.937374   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:18.434945   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:18.434970   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:18.434980   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:18.434987   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:18.437420   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:18.437468   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:18.437489   71330 round_trippers.go:580]     Audit-Id: 6a3a017c-9874-4ea1-8a99-66700dcb884a
	I0811 23:25:18.437497   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:18.437505   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:18.437514   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:18.437521   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:18.437528   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:18 GMT
	I0811 23:25:18.437645   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:18.935194   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:18.935220   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:18.935230   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:18.935239   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:18.937763   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:18.937786   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:18.937794   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:18 GMT
	I0811 23:25:18.937801   71330 round_trippers.go:580]     Audit-Id: ad09cd70-7b48-4286-8e8f-b8455c82e5a7
	I0811 23:25:18.937808   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:18.937814   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:18.937821   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:18.937828   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:18.937978   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:19.435300   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:19.435326   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:19.435336   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:19.435343   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:19.437998   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:19.438019   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:19.438028   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:19.438035   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:19.438042   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:19.438049   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:19 GMT
	I0811 23:25:19.438055   71330 round_trippers.go:580]     Audit-Id: 3c99647b-7b47-4d31-a12f-38bba4ebfa32
	I0811 23:25:19.438062   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:19.438184   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:19.438568   71330 node_ready.go:58] node "multinode-891155" has status "Ready":"False"
	I0811 23:25:19.934719   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:19.934741   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:19.934751   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:19.934759   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:19.937742   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:19.937768   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:19.937777   71330 round_trippers.go:580]     Audit-Id: 1f6e99f5-9a31-4446-9c0d-2bdf8aeb2630
	I0811 23:25:19.937784   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:19.937791   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:19.937797   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:19.937804   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:19.937811   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:19 GMT
	I0811 23:25:19.937957   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:20.435311   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:20.435336   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:20.435346   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:20.435353   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:20.438275   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:20.438300   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:20.438310   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:20.438317   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:20 GMT
	I0811 23:25:20.438324   71330 round_trippers.go:580]     Audit-Id: 5b3a30e6-daa4-4dc3-bf97-cec9f88aae66
	I0811 23:25:20.438330   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:20.438337   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:20.438343   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:20.438518   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:20.935142   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:20.935183   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:20.935192   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:20.935200   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:20.937611   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:20.937636   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:20.937646   71330 round_trippers.go:580]     Audit-Id: a840e3fe-a03d-4777-846b-4b30d048484a
	I0811 23:25:20.937653   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:20.937660   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:20.937670   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:20.937687   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:20.937695   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:20 GMT
	I0811 23:25:20.937905   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:21.435091   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:21.435119   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:21.435129   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:21.435137   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:21.437866   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:21.437888   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:21.437897   71330 round_trippers.go:580]     Audit-Id: fae6ce60-abef-480c-9be0-e2a88cca95ce
	I0811 23:25:21.437904   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:21.437911   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:21.437917   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:21.437925   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:21.437932   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:21 GMT
	I0811 23:25:21.438066   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:21.935335   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:21.935375   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:21.935385   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:21.935392   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:21.938064   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:21.938091   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:21.938099   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:21.938107   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:21 GMT
	I0811 23:25:21.938114   71330 round_trippers.go:580]     Audit-Id: 0448d86f-c37c-4067-a17d-106554e478a4
	I0811 23:25:21.938121   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:21.938128   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:21.938137   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:21.938253   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:21.938650   71330 node_ready.go:58] node "multinode-891155" has status "Ready":"False"
	I0811 23:25:22.434779   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:22.434803   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:22.434818   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:22.434826   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:22.437919   71330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:25:22.437945   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:22.437954   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:22.437961   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:22.437968   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:22.437975   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:22.437982   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:22 GMT
	I0811 23:25:22.437989   71330 round_trippers.go:580]     Audit-Id: efc26a1d-c9bc-435a-85c3-d3544930715a
	I0811 23:25:22.438112   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:22.935078   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:22.935100   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:22.935111   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:22.935119   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:22.937702   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:22.937734   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:22.937744   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:22.937752   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:22.937763   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:22.937773   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:22 GMT
	I0811 23:25:22.937788   71330 round_trippers.go:580]     Audit-Id: d9371ce6-2409-4d97-a0f5-a15458ca38e2
	I0811 23:25:22.937795   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:22.938132   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:23.434610   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:23.434635   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:23.434644   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:23.434652   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:23.437357   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:23.437378   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:23.437387   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:23.437395   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:23.437402   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:23.437409   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:23 GMT
	I0811 23:25:23.437416   71330 round_trippers.go:580]     Audit-Id: e0dd135b-61cd-43cd-b109-096124cb3f19
	I0811 23:25:23.437432   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:23.437770   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:23.934906   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:23.934928   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:23.934938   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:23.934946   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:23.937555   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:23.937576   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:23.937584   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:23.937592   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:23.937598   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:23.937606   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:23 GMT
	I0811 23:25:23.937613   71330 round_trippers.go:580]     Audit-Id: 40baa5af-0c93-45db-9ca9-5dccb74655e1
	I0811 23:25:23.937620   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:23.937702   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:24.435372   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:24.435400   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:24.435410   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:24.435421   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:24.438046   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:24.438073   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:24.438082   71330 round_trippers.go:580]     Audit-Id: 9eb8f369-8064-4ded-aff8-d27dbc7ce1c6
	I0811 23:25:24.438089   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:24.438095   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:24.438102   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:24.438109   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:24.438120   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:24 GMT
	I0811 23:25:24.438248   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:24.438630   71330 node_ready.go:58] node "multinode-891155" has status "Ready":"False"
	I0811 23:25:24.934319   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:24.934343   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:24.934353   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:24.934361   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:24.936922   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:24.936947   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:24.936956   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:24.936963   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:24.936970   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:24.936978   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:24 GMT
	I0811 23:25:24.936988   71330 round_trippers.go:580]     Audit-Id: 351ab6a7-e4cf-44d2-be5a-575cdbbcb599
	I0811 23:25:24.936995   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:24.937211   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:25.435178   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:25.435212   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:25.435221   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:25.435229   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:25.438024   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:25.438045   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:25.438054   71330 round_trippers.go:580]     Audit-Id: f557f7ab-58c3-49fe-ba79-d5d3026e9d5d
	I0811 23:25:25.438061   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:25.438068   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:25.438074   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:25.438081   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:25.438088   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:25 GMT
	I0811 23:25:25.438240   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:25.935142   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:25.935165   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:25.935175   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:25.935183   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:25.937685   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:25.937712   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:25.937722   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:25.937729   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:25.937759   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:25 GMT
	I0811 23:25:25.937775   71330 round_trippers.go:580]     Audit-Id: 4e329425-7579-4048-81ab-2979a2e710bd
	I0811 23:25:25.937781   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:25.937788   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:25.938058   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:26.434387   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:26.434413   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:26.434423   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:26.434431   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:26.437121   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:26.437142   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:26.437151   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:26.437158   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:26.437166   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:26 GMT
	I0811 23:25:26.437173   71330 round_trippers.go:580]     Audit-Id: 17469a84-7ac7-49f7-a60b-2d18a99c4d8c
	I0811 23:25:26.437179   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:26.437186   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:26.437319   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:26.934338   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:26.934362   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:26.934372   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:26.934379   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:26.936909   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:26.936929   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:26.936938   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:26.936945   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:26 GMT
	I0811 23:25:26.936952   71330 round_trippers.go:580]     Audit-Id: ab364619-c732-43bf-91cc-b16ee327180f
	I0811 23:25:26.936958   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:26.936965   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:26.936971   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:26.937114   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:26.937502   71330 node_ready.go:58] node "multinode-891155" has status "Ready":"False"
	I0811 23:25:27.435295   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:27.435320   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:27.435330   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:27.435337   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:27.438126   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:27.438149   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:27.438158   71330 round_trippers.go:580]     Audit-Id: cf8960a5-336e-42d6-ad84-2be00d66ce84
	I0811 23:25:27.438165   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:27.438171   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:27.438179   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:27.438185   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:27.438193   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:27 GMT
	I0811 23:25:27.438305   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:27.934447   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:27.934470   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:27.934481   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:27.934488   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:27.937488   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:27.937513   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:27.937522   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:27.937529   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:27.937537   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:27.937545   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:27 GMT
	I0811 23:25:27.937552   71330 round_trippers.go:580]     Audit-Id: 4da4007d-6d4d-40b1-9ee4-beb28de35f69
	I0811 23:25:27.937559   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:27.937885   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:28.434508   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:28.434532   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:28.434543   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:28.434550   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:28.437102   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:28.437123   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:28.437132   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:28.437139   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:28 GMT
	I0811 23:25:28.437146   71330 round_trippers.go:580]     Audit-Id: 4b1a9a05-299b-4274-b11e-7393c0bcf1cd
	I0811 23:25:28.437152   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:28.437159   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:28.437165   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:28.437303   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:28.934363   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:28.934387   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:28.934400   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:28.934408   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:28.936945   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:28.936968   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:28.936978   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:28.936985   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:28.936991   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:28.936998   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:28.937005   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:28 GMT
	I0811 23:25:28.937012   71330 round_trippers.go:580]     Audit-Id: 392ac84f-55e9-4c96-a384-b174c45759ce
	I0811 23:25:28.937123   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:28.937520   71330 node_ready.go:58] node "multinode-891155" has status "Ready":"False"
	I0811 23:25:29.435154   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:29.435179   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:29.435189   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:29.435197   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:29.437949   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:29.437971   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:29.437980   71330 round_trippers.go:580]     Audit-Id: a168a346-00cf-4641-a3cc-595d5a2eaae5
	I0811 23:25:29.437987   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:29.437994   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:29.438001   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:29.438008   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:29.438015   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:29 GMT
	I0811 23:25:29.438122   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:29.934637   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:29.934663   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:29.934673   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:29.934680   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:29.937312   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:29.937337   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:29.937346   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:29.937353   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:29.937360   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:29.937367   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:29.937375   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:29 GMT
	I0811 23:25:29.937382   71330 round_trippers.go:580]     Audit-Id: cff031e7-a9e6-4660-99b3-8cb1e6d0d8ed
	I0811 23:25:29.937677   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:30.434382   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:30.434406   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:30.434417   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:30.434425   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:30.436892   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:30.436912   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:30.436921   71330 round_trippers.go:580]     Audit-Id: 6c67b663-16a1-4e52-9084-7a3b599bb6de
	I0811 23:25:30.436928   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:30.436935   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:30.436941   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:30.436948   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:30.436955   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:30 GMT
	I0811 23:25:30.437107   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:30.935305   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:30.935330   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:30.935341   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:30.935348   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:30.938004   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:30.938031   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:30.938040   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:30.938047   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:30.938054   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:30.938060   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:30.938067   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:30 GMT
	I0811 23:25:30.938074   71330 round_trippers.go:580]     Audit-Id: 05f384d5-da5a-4a05-a28c-4d915b99b1fe
	I0811 23:25:30.938177   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"347","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0811 23:25:30.938567   71330 node_ready.go:58] node "multinode-891155" has status "Ready":"False"
	I0811 23:25:31.435282   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:31.435307   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:31.435317   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:31.435324   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:31.437808   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:31.437831   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:31.437840   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:31.437847   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:31.437854   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:31.437865   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:31 GMT
	I0811 23:25:31.437872   71330 round_trippers.go:580]     Audit-Id: 28b7865c-31ee-4dc1-a722-bae80da81090
	I0811 23:25:31.437886   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:31.438190   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"421","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0811 23:25:31.438578   71330 node_ready.go:49] node "multinode-891155" has status "Ready":"True"
	I0811 23:25:31.438596   71330 node_ready.go:38] duration metric: took 32.030027864s waiting for node "multinode-891155" to be "Ready" ...
	I0811 23:25:31.438607   71330 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0811 23:25:31.438674   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0811 23:25:31.438685   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:31.438693   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:31.438706   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:31.442274   71330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:25:31.442299   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:31.442307   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:31.442315   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:31 GMT
	I0811 23:25:31.442321   71330 round_trippers.go:580]     Audit-Id: d90a01bc-867c-47d0-be9a-aed7667bb993
	I0811 23:25:31.442330   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:31.442336   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:31.442346   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:31.442681   71330 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"427"},"items":[{"metadata":{"name":"coredns-5d78c9869d-2zwtc","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"bdd66be6-b910-4f6f-8679-d0b0009e0cf4","resourceVersion":"427","creationTimestamp":"2023-08-11T23:24:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"d40c9552-811b-4579-860f-cb936e801f97","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d40c9552-811b-4579-860f-cb936e801f97\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55535 chars]
	I0811 23:25:31.446649   71330 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-2zwtc" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:31.446739   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-2zwtc
	I0811 23:25:31.446750   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:31.446760   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:31.446767   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:31.449259   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:31.449307   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:31.449318   71330 round_trippers.go:580]     Audit-Id: ba67dbe4-7dc6-4b3d-8a8a-84fa125f71de
	I0811 23:25:31.449326   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:31.449335   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:31.449348   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:31.449355   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:31.449365   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:31 GMT
	I0811 23:25:31.449454   71330 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-2zwtc","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"bdd66be6-b910-4f6f-8679-d0b0009e0cf4","resourceVersion":"427","creationTimestamp":"2023-08-11T23:24:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"d40c9552-811b-4579-860f-cb936e801f97","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d40c9552-811b-4579-860f-cb936e801f97\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0811 23:25:31.449918   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:31.449931   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:31.449939   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:31.449947   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:31.452308   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:31.452326   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:31.452335   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:31 GMT
	I0811 23:25:31.452342   71330 round_trippers.go:580]     Audit-Id: 84fdd09f-dcac-4b78-8228-c9ac3700aada
	I0811 23:25:31.452348   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:31.452355   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:31.452361   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:31.452368   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:31.452556   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"421","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0811 23:25:31.452963   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-2zwtc
	I0811 23:25:31.452978   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:31.452986   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:31.452994   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:31.455293   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:31.455314   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:31.455322   71330 round_trippers.go:580]     Audit-Id: cb0ff447-93bd-48e4-9202-606535b05013
	I0811 23:25:31.455330   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:31.455337   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:31.455343   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:31.455353   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:31.455367   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:31 GMT
	I0811 23:25:31.455532   71330 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-2zwtc","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"bdd66be6-b910-4f6f-8679-d0b0009e0cf4","resourceVersion":"427","creationTimestamp":"2023-08-11T23:24:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"d40c9552-811b-4579-860f-cb936e801f97","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d40c9552-811b-4579-860f-cb936e801f97\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0811 23:25:31.456038   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:31.456053   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:31.456062   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:31.456070   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:31.458403   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:31.458422   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:31.458431   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:31.458438   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:31 GMT
	I0811 23:25:31.458445   71330 round_trippers.go:580]     Audit-Id: bf812c72-0d10-411f-bf63-c199d1633155
	I0811 23:25:31.458454   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:31.458460   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:31.458467   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:31.458585   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"421","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0811 23:25:31.959518   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-2zwtc
	I0811 23:25:31.959594   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:31.959618   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:31.959638   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:31.962427   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:31.962454   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:31.962464   71330 round_trippers.go:580]     Audit-Id: 47bc7c5d-3f3a-4380-80db-de28fb0c1640
	I0811 23:25:31.962471   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:31.962478   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:31.962486   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:31.962493   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:31.962500   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:31 GMT
	I0811 23:25:31.963167   71330 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-2zwtc","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"bdd66be6-b910-4f6f-8679-d0b0009e0cf4","resourceVersion":"427","creationTimestamp":"2023-08-11T23:24:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"d40c9552-811b-4579-860f-cb936e801f97","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d40c9552-811b-4579-860f-cb936e801f97\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0811 23:25:31.963737   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:31.963753   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:31.963763   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:31.963775   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:31.966257   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:31.966276   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:31.966284   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:31 GMT
	I0811 23:25:31.966291   71330 round_trippers.go:580]     Audit-Id: 5ae54192-6aa0-465b-9598-1c84b1dc89d2
	I0811 23:25:31.966298   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:31.966304   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:31.966311   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:31.966318   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:31.966741   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"421","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0811 23:25:32.459247   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-2zwtc
	I0811 23:25:32.459283   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:32.459292   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:32.459303   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:32.462152   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:32.462180   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:32.462190   71330 round_trippers.go:580]     Audit-Id: 30d203b6-058c-445b-90fb-0e85b9599a3c
	I0811 23:25:32.462197   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:32.462205   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:32.462212   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:32.462218   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:32.462225   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:32 GMT
	I0811 23:25:32.462324   71330 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-2zwtc","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"bdd66be6-b910-4f6f-8679-d0b0009e0cf4","resourceVersion":"440","creationTimestamp":"2023-08-11T23:24:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"d40c9552-811b-4579-860f-cb936e801f97","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d40c9552-811b-4579-860f-cb936e801f97\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0811 23:25:32.462839   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:32.462846   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:32.462854   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:32.462861   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:32.465319   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:32.465343   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:32.465375   71330 round_trippers.go:580]     Audit-Id: 16f30ec1-23cf-47f3-b6ed-e0c14b2ad6af
	I0811 23:25:32.465389   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:32.465397   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:32.465406   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:32.465415   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:32.465437   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:32 GMT
	I0811 23:25:32.465595   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"421","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0811 23:25:32.466020   71330 pod_ready.go:92] pod "coredns-5d78c9869d-2zwtc" in "kube-system" namespace has status "Ready":"True"
	I0811 23:25:32.466039   71330 pod_ready.go:81] duration metric: took 1.019355631s waiting for pod "coredns-5d78c9869d-2zwtc" in "kube-system" namespace to be "Ready" ...
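
The block above is one iteration of the pod_ready.go wait loop: roughly every 500ms it GETs the pod, checks the Ready condition, then GETs the node to confirm the node itself is still healthy. A minimal client-go sketch of the same polling pattern, assuming a standard kubeconfig (the node re-check that minikube also performs is omitted for brevity):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls until the pod's Ready condition is True, mirroring
// the GET-pod cadence visible in the log above.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms spacing of the requests above
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(cs, "kube-system", "coredns-5d78c9869d-2zwtc", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("Ready")
}
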
	I0811 23:25:32.466063   71330 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-891155" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:32.466141   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-891155
	I0811 23:25:32.466150   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:32.466158   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:32.466168   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:32.468639   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:32.468664   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:32.468673   71330 round_trippers.go:580]     Audit-Id: 9b686578-313e-4e3d-87e6-b78544f80d5e
	I0811 23:25:32.468679   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:32.468686   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:32.468693   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:32.468704   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:32.468711   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:32 GMT
	I0811 23:25:32.468969   71330 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-891155","namespace":"kube-system","uid":"3f2510f6-83c6-4da5-b61c-e95f02efe646","resourceVersion":"293","creationTimestamp":"2023-08-11T23:24:45Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"ac964265a61a9bdbd78ff9211e52f7d4","kubernetes.io/config.mirror":"ac964265a61a9bdbd78ff9211e52f7d4","kubernetes.io/config.seen":"2023-08-11T23:24:38.768275404Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0811 23:25:32.469447   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:32.469466   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:32.469475   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:32.469483   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:32.472030   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:32.472064   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:32.472073   71330 round_trippers.go:580]     Audit-Id: 26de898f-79ca-4d67-bcfc-c0fa187f8208
	I0811 23:25:32.472081   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:32.472088   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:32.472098   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:32.472105   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:32.472114   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:32 GMT
	I0811 23:25:32.472436   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"421","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0811 23:25:32.472827   71330 pod_ready.go:92] pod "etcd-multinode-891155" in "kube-system" namespace has status "Ready":"True"
	I0811 23:25:32.472846   71330 pod_ready.go:81] duration metric: took 6.769278ms waiting for pod "etcd-multinode-891155" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:32.472861   71330 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-891155" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:32.472928   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-891155
	I0811 23:25:32.472938   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:32.472946   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:32.472953   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:32.475607   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:32.475664   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:32.475687   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:32.475710   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:32.475747   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:32.475772   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:32 GMT
	I0811 23:25:32.475796   71330 round_trippers.go:580]     Audit-Id: 66062e43-3c5f-441a-8de4-4f23bc6a3997
	I0811 23:25:32.475803   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:32.475954   71330 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-891155","namespace":"kube-system","uid":"dfd78e52-0afb-4e5b-95e3-875b2bcee96a","resourceVersion":"291","creationTimestamp":"2023-08-11T23:24:46Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"e356ac5d6e97af27265e5b5cb0b92081","kubernetes.io/config.mirror":"e356ac5d6e97af27265e5b5cb0b92081","kubernetes.io/config.seen":"2023-08-11T23:24:46.098345297Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0811 23:25:32.476559   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:32.476575   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:32.476584   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:32.476592   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:32.483657   71330 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0811 23:25:32.483685   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:32.483695   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:32.483703   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:32 GMT
	I0811 23:25:32.483710   71330 round_trippers.go:580]     Audit-Id: 0e4f58b9-095d-4a79-8868-d3cb8aa7d03e
	I0811 23:25:32.483716   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:32.483723   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:32.483731   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:32.484040   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"421","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0811 23:25:32.484432   71330 pod_ready.go:92] pod "kube-apiserver-multinode-891155" in "kube-system" namespace has status "Ready":"True"
	I0811 23:25:32.484443   71330 pod_ready.go:81] duration metric: took 11.569159ms waiting for pod "kube-apiserver-multinode-891155" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:32.484454   71330 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-891155" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:32.484511   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-891155
	I0811 23:25:32.484516   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:32.484523   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:32.484530   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:32.487014   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:32.487033   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:32.487041   71330 round_trippers.go:580]     Audit-Id: 591b1266-97d9-4ff8-b8c7-f434b306ec87
	I0811 23:25:32.487065   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:32.487080   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:32.487088   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:32.487099   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:32.487107   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:32 GMT
	I0811 23:25:32.487328   71330 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-891155","namespace":"kube-system","uid":"c685b575-39b4-4046-bb4d-eae4f5a3ce41","resourceVersion":"295","creationTimestamp":"2023-08-11T23:24:46Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9dd1280a21dd2149b12943bab840e8ed","kubernetes.io/config.mirror":"9dd1280a21dd2149b12943bab840e8ed","kubernetes.io/config.seen":"2023-08-11T23:24:46.098346733Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0811 23:25:32.636137   71330 request.go:628] Waited for 148.251623ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:32.636195   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:32.636203   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:32.636219   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:32.636229   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:32.638739   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:32.638762   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:32.638771   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:32.638778   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:32.638785   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:32.638795   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:32 GMT
	I0811 23:25:32.638808   71330 round_trippers.go:580]     Audit-Id: dc6e4d90-6ceb-431e-bd00-e053b63c51cc
	I0811 23:25:32.638814   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:32.639018   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"421","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0811 23:25:32.639402   71330 pod_ready.go:92] pod "kube-controller-manager-multinode-891155" in "kube-system" namespace has status "Ready":"True"
	I0811 23:25:32.639418   71330 pod_ready.go:81] duration metric: took 154.956588ms waiting for pod "kube-controller-manager-multinode-891155" in "kube-system" namespace to be "Ready" ...
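
The "Waited for ... due to client-side throttling, not priority and fairness" lines above are emitted by client-go's request.go when its local token-bucket rate limiter delays a request before it ever reaches the apiserver. The knobs live on the client config; a sketch, with the QPS/Burst values stated as an assumption (they match client-go's historical defaults):

package sketch

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newThrottledClient shows where the client-side waits originate: every
// request passes through a token bucket sized by QPS and Burst before it
// goes on the wire, and any wait is logged as seen above.
func newThrottledClient() (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 5    // steady-state requests per second (client-go default, assumed)
	cfg.Burst = 10 // extra headroom for short bursts (default, assumed)
	return kubernetes.NewForConfig(cfg)
}
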
	I0811 23:25:32.639430   71330 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h2bt7" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:32.835832   71330 request.go:628] Waited for 196.315844ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h2bt7
	I0811 23:25:32.835890   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h2bt7
	I0811 23:25:32.835896   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:32.835905   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:32.835940   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:32.838538   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:32.838596   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:32.838628   71330 round_trippers.go:580]     Audit-Id: 63d95847-37ad-4b0a-82c3-feee51d2f17f
	I0811 23:25:32.838647   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:32.838683   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:32.838710   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:32.838724   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:32.838732   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:32 GMT
	I0811 23:25:32.838852   71330 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-h2bt7","generateName":"kube-proxy-","namespace":"kube-system","uid":"0088ca20-d7c2-499c-8295-4cb3341df94e","resourceVersion":"406","creationTimestamp":"2023-08-11T23:25:00Z","labels":{"controller-revision-hash":"86cc8bcbf7","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8f2791eb-4070-4661-bec7-2fb7609006cb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8f2791eb-4070-4661-bec7-2fb7609006cb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0811 23:25:33.035697   71330 request.go:628] Waited for 196.352866ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:33.035779   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:33.035790   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:33.035799   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:33.035807   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:33.038528   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:33.038599   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:33.038622   71330 round_trippers.go:580]     Audit-Id: 2aae3e97-477d-44b0-bdc1-e08374a4e80c
	I0811 23:25:33.038636   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:33.038643   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:33.038650   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:33.038671   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:33.038679   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:33 GMT
	I0811 23:25:33.038837   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"421","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0811 23:25:33.039250   71330 pod_ready.go:92] pod "kube-proxy-h2bt7" in "kube-system" namespace has status "Ready":"True"
	I0811 23:25:33.039269   71330 pod_ready.go:81] duration metric: took 399.826777ms waiting for pod "kube-proxy-h2bt7" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:33.039280   71330 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-891155" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:33.235622   71330 request.go:628] Waited for 196.27911ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-891155
	I0811 23:25:33.235696   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-891155
	I0811 23:25:33.235707   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:33.235716   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:33.235727   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:33.238364   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:33.238390   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:33.238399   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:33.238407   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:33.238413   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:33.238420   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:33.238434   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:33 GMT
	I0811 23:25:33.238447   71330 round_trippers.go:580]     Audit-Id: 7d14b1ed-c234-40fd-85da-3b1f127e38d6
	I0811 23:25:33.238626   71330 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-891155","namespace":"kube-system","uid":"4d802e08-54b8-4829-8dd3-a68522e6a129","resourceVersion":"322","creationTimestamp":"2023-08-11T23:24:46Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d9309c1ccc242467f2edd96945b86842","kubernetes.io/config.mirror":"d9309c1ccc242467f2edd96945b86842","kubernetes.io/config.seen":"2023-08-11T23:24:46.098349104Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0811 23:25:33.435308   71330 request.go:628] Waited for 196.271774ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:33.435383   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:33.435427   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:33.435455   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:33.435470   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:33.438028   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:33.438091   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:33.438114   71330 round_trippers.go:580]     Audit-Id: a53fdadb-0b61-47a7-a0ba-017493b28330
	I0811 23:25:33.438141   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:33.438163   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:33.438177   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:33.438185   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:33.438191   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:33 GMT
	I0811 23:25:33.438314   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"421","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0811 23:25:33.438697   71330 pod_ready.go:92] pod "kube-scheduler-multinode-891155" in "kube-system" namespace has status "Ready":"True"
	I0811 23:25:33.438713   71330 pod_ready.go:81] duration metric: took 399.426305ms waiting for pod "kube-scheduler-multinode-891155" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:33.438725   71330 pod_ready.go:38] duration metric: took 2.000102261s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0811 23:25:33.438739   71330 api_server.go:52] waiting for apiserver process to appear ...
	I0811 23:25:33.438835   71330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0811 23:25:33.450739   71330 command_runner.go:130] > 1271
	I0811 23:25:33.452104   71330 api_server.go:72] duration metric: took 34.216668642s to wait for apiserver process to appear ...
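
The apiserver-process wait above shells out to pgrep over SSH; the single output line ("1271") is the PID. A local stand-in with os/exec (minikube actually routes this through its ssh_runner, which is not reproduced here):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// apiserverPID mirrors `sudo pgrep -xnf kube-apiserver.*minikube.*`:
// -f matches against the full command line, -x requires the pattern to
// match exactly, and -n returns only the newest matching process.
func apiserverPID() (string, error) {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	pid, err := apiserverPID()
	if err != nil {
		panic(err)
	}
	fmt.Println("kube-apiserver pid:", pid)
}
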
	I0811 23:25:33.452127   71330 api_server.go:88] waiting for apiserver healthz status ...
	I0811 23:25:33.452144   71330 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0811 23:25:33.462064   71330 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0811 23:25:33.462136   71330 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0811 23:25:33.462146   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:33.462155   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:33.462162   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:33.463378   71330 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0811 23:25:33.463399   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:33.463407   71330 round_trippers.go:580]     Content-Length: 263
	I0811 23:25:33.463426   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:33 GMT
	I0811 23:25:33.463449   71330 round_trippers.go:580]     Audit-Id: 51b650dc-18cb-4d5b-87ad-41c63780a0ef
	I0811 23:25:33.463461   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:33.463468   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:33.463478   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:33.463484   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:33.463515   71330 request.go:1188] Response Body: {
	  "major": "1",
	  "minor": "27",
	  "gitVersion": "v1.27.4",
	  "gitCommit": "fa3d7990104d7c1f16943a67f11b154b71f6a132",
	  "gitTreeState": "clean",
	  "buildDate": "2023-07-19T12:14:49Z",
	  "goVersion": "go1.20.6",
	  "compiler": "gc",
	  "platform": "linux/arm64"
	}
	I0811 23:25:33.463621   71330 api_server.go:141] control plane version: v1.27.4
	I0811 23:25:33.463640   71330 api_server.go:131] duration metric: took 11.506316ms to wait for apiserver health ...
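
Once the process exists, the log shows two plain HTTP probes: /healthz must return the literal body "ok", and /version yields the control-plane version. A net/http sketch of both, assuming the endpoints answer anonymously (stock clusters typically allow this; minikube actually authenticates with the cluster's client certificates, and InsecureSkipVerify below is a placeholder for that CA plumbing, not something to copy into real code):

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probe checks /healthz for the literal "ok", then decodes gitVersion
// from /version, matching the two requests in the log above.
func probe(host string) (string, error) {
	c := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := c.Get("https://" + host + "/healthz")
	if err != nil {
		return "", err
	}
	body, _ := io.ReadAll(resp.Body)
	resp.Body.Close()
	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
		return "", fmt.Errorf("healthz: %d %q", resp.StatusCode, body)
	}
	resp, err = c.Get("https://" + host + "/version")
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	var v struct {
		GitVersion string `json:"gitVersion"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
		return "", err
	}
	return v.GitVersion, nil
}

func main() {
	ver, err := probe("192.168.58.2:8443")
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", ver) // v1.27.4 in the run above
}
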
	I0811 23:25:33.463649   71330 system_pods.go:43] waiting for kube-system pods to appear ...
	I0811 23:25:33.636029   71330 request.go:628] Waited for 172.305564ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0811 23:25:33.636107   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0811 23:25:33.636119   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:33.636128   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:33.636136   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:33.639550   71330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:25:33.639572   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:33.639581   71330 round_trippers.go:580]     Audit-Id: 10964662-d52c-4ac3-b9b5-769114b47add
	I0811 23:25:33.639588   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:33.639594   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:33.639602   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:33.639611   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:33.639625   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:33 GMT
	I0811 23:25:33.640368   71330 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"444"},"items":[{"metadata":{"name":"coredns-5d78c9869d-2zwtc","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"bdd66be6-b910-4f6f-8679-d0b0009e0cf4","resourceVersion":"440","creationTimestamp":"2023-08-11T23:24:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"d40c9552-811b-4579-860f-cb936e801f97","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d40c9552-811b-4579-860f-cb936e801f97\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55613 chars]
	I0811 23:25:33.642718   71330 system_pods.go:59] 8 kube-system pods found
	I0811 23:25:33.642747   71330 system_pods.go:61] "coredns-5d78c9869d-2zwtc" [bdd66be6-b910-4f6f-8679-d0b0009e0cf4] Running
	I0811 23:25:33.642754   71330 system_pods.go:61] "etcd-multinode-891155" [3f2510f6-83c6-4da5-b61c-e95f02efe646] Running
	I0811 23:25:33.642759   71330 system_pods.go:61] "kindnet-jjmpp" [9cac9d75-0974-4abc-975e-b7a786d44c90] Running
	I0811 23:25:33.642769   71330 system_pods.go:61] "kube-apiserver-multinode-891155" [dfd78e52-0afb-4e5b-95e3-875b2bcee96a] Running
	I0811 23:25:33.642779   71330 system_pods.go:61] "kube-controller-manager-multinode-891155" [c685b575-39b4-4046-bb4d-eae4f5a3ce41] Running
	I0811 23:25:33.642784   71330 system_pods.go:61] "kube-proxy-h2bt7" [0088ca20-d7c2-499c-8295-4cb3341df94e] Running
	I0811 23:25:33.642789   71330 system_pods.go:61] "kube-scheduler-multinode-891155" [4d802e08-54b8-4829-8dd3-a68522e6a129] Running
	I0811 23:25:33.642795   71330 system_pods.go:61] "storage-provisioner" [65117b46-887a-42ef-8c9f-bb2f789898e5] Running
	I0811 23:25:33.642808   71330 system_pods.go:74] duration metric: took 179.154449ms to wait for pod list to return data ...
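
Note the shape change: the eight-pod summary comes from one PodList request rather than eight individual GETs, with the Running check done client-side. A client-go sketch of that single call (clientset construction as in the first sketch above):

package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listSystemPods issues the one kube-system PodList call seen above and
// prints each pod's name, UID, and whether its phase is Running, the data
// behind the "8 kube-system pods found" summary.
func listSystemPods(cs *kubernetes.Clientset) error {
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		running := p.Status.Phase == corev1.PodRunning
		fmt.Printf("%q [%s] Running=%v\n", p.Name, p.UID, running)
	}
	return nil
}
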
	I0811 23:25:33.642817   71330 default_sa.go:34] waiting for default service account to be created ...
	I0811 23:25:33.836172   71330 request.go:628] Waited for 193.283567ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0811 23:25:33.836268   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0811 23:25:33.836297   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:33.836313   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:33.836321   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:33.838992   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:33.839017   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:33.839034   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:33.839042   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:33.839066   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:33.839082   71330 round_trippers.go:580]     Content-Length: 261
	I0811 23:25:33.839091   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:33 GMT
	I0811 23:25:33.839101   71330 round_trippers.go:580]     Audit-Id: 7d995873-8e94-427b-9b33-a201abfdd1f3
	I0811 23:25:33.839126   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:33.839160   71330 request.go:1188] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"444"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"930d705c-f0c1-406d-b024-0d0c61aa40eb","resourceVersion":"331","creationTimestamp":"2023-08-11T23:24:59Z"}}]}
	I0811 23:25:33.839382   71330 default_sa.go:45] found service account: "default"
	I0811 23:25:33.839402   71330 default_sa.go:55] duration metric: took 196.575516ms for default service account to be created ...
	I0811 23:25:33.839429   71330 system_pods.go:116] waiting for k8s-apps to be running ...
	I0811 23:25:34.035886   71330 request.go:628] Waited for 196.38677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0811 23:25:34.035965   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0811 23:25:34.035975   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:34.035984   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:34.035992   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:34.040062   71330 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0811 23:25:34.040093   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:34.040104   71330 round_trippers.go:580]     Audit-Id: 9ea1dece-abd2-4f6f-8bbb-722876104092
	I0811 23:25:34.040111   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:34.040118   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:34.040126   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:34.040138   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:34.040150   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:34 GMT
	I0811 23:25:34.040812   71330 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"445"},"items":[{"metadata":{"name":"coredns-5d78c9869d-2zwtc","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"bdd66be6-b910-4f6f-8679-d0b0009e0cf4","resourceVersion":"440","creationTimestamp":"2023-08-11T23:24:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"d40c9552-811b-4579-860f-cb936e801f97","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d40c9552-811b-4579-860f-cb936e801f97\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55613 chars]
	I0811 23:25:34.043158   71330 system_pods.go:86] 8 kube-system pods found
	I0811 23:25:34.043190   71330 system_pods.go:89] "coredns-5d78c9869d-2zwtc" [bdd66be6-b910-4f6f-8679-d0b0009e0cf4] Running
	I0811 23:25:34.043198   71330 system_pods.go:89] "etcd-multinode-891155" [3f2510f6-83c6-4da5-b61c-e95f02efe646] Running
	I0811 23:25:34.043203   71330 system_pods.go:89] "kindnet-jjmpp" [9cac9d75-0974-4abc-975e-b7a786d44c90] Running
	I0811 23:25:34.043208   71330 system_pods.go:89] "kube-apiserver-multinode-891155" [dfd78e52-0afb-4e5b-95e3-875b2bcee96a] Running
	I0811 23:25:34.043214   71330 system_pods.go:89] "kube-controller-manager-multinode-891155" [c685b575-39b4-4046-bb4d-eae4f5a3ce41] Running
	I0811 23:25:34.043219   71330 system_pods.go:89] "kube-proxy-h2bt7" [0088ca20-d7c2-499c-8295-4cb3341df94e] Running
	I0811 23:25:34.043231   71330 system_pods.go:89] "kube-scheduler-multinode-891155" [4d802e08-54b8-4829-8dd3-a68522e6a129] Running
	I0811 23:25:34.043237   71330 system_pods.go:89] "storage-provisioner" [65117b46-887a-42ef-8c9f-bb2f789898e5] Running
	I0811 23:25:34.043244   71330 system_pods.go:126] duration metric: took 203.809136ms to wait for k8s-apps to be running ...
	I0811 23:25:34.043255   71330 system_svc.go:44] waiting for kubelet service to be running ....
	I0811 23:25:34.043316   71330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0811 23:25:34.057790   71330 system_svc.go:56] duration metric: took 14.52315ms WaitForService to wait for kubelet.
	I0811 23:25:34.057819   71330 kubeadm.go:581] duration metric: took 34.822391368s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
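
The kubelet check above is a single `systemctl is-active --quiet` run on the node over SSH: with --quiet the command prints nothing, and an exit code of 0 is the entire answer (the unit is active). A local stand-in, dropping the stray "service" token from the logged command line:

package main

import (
	"fmt"
	"os/exec"
)

// kubeletActive mirrors the systemctl probe above: is-active exits 0
// only when the unit is active, so a nil error means the kubelet is up.
func kubeletActive() bool {
	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}

func main() {
	fmt.Println("kubelet active:", kubeletActive())
}
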
	I0811 23:25:34.057852   71330 node_conditions.go:102] verifying NodePressure condition ...
	I0811 23:25:34.236229   71330 request.go:628] Waited for 178.296665ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0811 23:25:34.236304   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0811 23:25:34.236316   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:34.236326   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:34.236335   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:34.238814   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:34.238838   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:34.238846   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:34.238854   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:34.238860   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:34.238921   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:34.238938   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:34 GMT
	I0811 23:25:34.238946   71330 round_trippers.go:580]     Audit-Id: 61968712-8f1e-465f-b04d-5157bb121987
	I0811 23:25:34.239069   71330 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"445"},"items":[{"metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"421","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6082 chars]
	I0811 23:25:34.239538   71330 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0811 23:25:34.239562   71330 node_conditions.go:123] node cpu capacity is 2
	I0811 23:25:34.239594   71330 node_conditions.go:105] duration metric: took 181.735045ms to run NodePressure ...
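
The NodePressure step reads its two figures (203034800Ki of ephemeral storage, 2 CPUs) straight off the NodeList status. A client-go sketch of the same read, reusing a clientset built as in the first sketch:

package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeCapacity lists nodes and prints the capacity values verified above;
// ResourceList's StorageEphemeral()/Cpu() helpers return Quantities.
func nodeCapacity(cs *kubernetes.Clientset) error {
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s: ephemeral=%s cpu=%s\n",
			n.Name,
			n.Status.Capacity.StorageEphemeral().String(),
			n.Status.Capacity.Cpu().String())
	}
	return nil
}
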
	I0811 23:25:34.239606   71330 start.go:228] waiting for startup goroutines ...
	I0811 23:25:34.239619   71330 start.go:233] waiting for cluster config update ...
	I0811 23:25:34.239630   71330 start.go:242] writing updated cluster config ...
	I0811 23:25:34.242237   71330 out.go:177] 
	I0811 23:25:34.244206   71330 config.go:182] Loaded profile config "multinode-891155": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0811 23:25:34.244312   71330 profile.go:148] Saving config to /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/multinode-891155/config.json ...
	I0811 23:25:34.246746   71330 out.go:177] * Starting worker node multinode-891155-m02 in cluster multinode-891155
	I0811 23:25:34.248516   71330 cache.go:122] Beginning downloading kic base image for docker with crio
	I0811 23:25:34.250425   71330 out.go:177] * Pulling base image ...
	I0811 23:25:34.252594   71330 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0811 23:25:34.252636   71330 cache.go:57] Caching tarball of preloaded images
	I0811 23:25:34.252676   71330 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 in local docker daemon
	I0811 23:25:34.252751   71330 preload.go:174] Found /home/jenkins/minikube-integration/17044-2333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0811 23:25:34.252763   71330 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on crio
	I0811 23:25:34.252885   71330 profile.go:148] Saving config to /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/multinode-891155/config.json ...
	I0811 23:25:34.278644   71330 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 in local docker daemon, skipping pull
	I0811 23:25:34.278678   71330 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 exists in daemon, skipping load
	I0811 23:25:34.278703   71330 cache.go:195] Successfully downloaded all kic artifacts
	I0811 23:25:34.278733   71330 start.go:365] acquiring machines lock for multinode-891155-m02: {Name:mkb6523984040475767f74aa655c062d898836be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 23:25:34.278862   71330 start.go:369] acquired machines lock for "multinode-891155-m02" in 110.565µs
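
Machine creation is serialized behind a named cross-process lock; the spec above shows its retry delay (500ms) and timeout (10m). A rough file-lock equivalent of that acquire loop, offered only as an analogy and not minikube's actual implementation:

package sketch

import (
	"fmt"
	"os"
	"syscall"
	"time"
)

// acquireMachinesLock approximates the lock spec above: retry a
// non-blocking advisory flock every `delay` until `timeout` elapses.
func acquireMachinesLock(path string, delay, timeout time.Duration) (*os.File, error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return nil, err
	}
	deadline := time.Now().Add(timeout)
	for {
		if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err == nil {
			return f, nil // caller releases the lock by closing f
		}
		if time.Now().After(deadline) {
			f.Close()
			return nil, fmt.Errorf("timed out acquiring %s", path)
		}
		time.Sleep(delay)
	}
}
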
	I0811 23:25:34.278890   71330 start.go:93] Provisioning new machine with config: &{Name:multinode-891155 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-891155 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name:m02 IP: Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0811 23:25:34.279008   71330 start.go:125] createHost starting for "m02" (driver="docker")
	I0811 23:25:34.282572   71330 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0811 23:25:34.282701   71330 start.go:159] libmachine.API.Create for "multinode-891155" (driver="docker")
	I0811 23:25:34.282731   71330 client.go:168] LocalClient.Create starting
	I0811 23:25:34.282813   71330 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem
	I0811 23:25:34.282849   71330 main.go:141] libmachine: Decoding PEM data...
	I0811 23:25:34.282865   71330 main.go:141] libmachine: Parsing certificate...
	I0811 23:25:34.282923   71330 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17044-2333/.minikube/certs/cert.pem
	I0811 23:25:34.282944   71330 main.go:141] libmachine: Decoding PEM data...
	I0811 23:25:34.282954   71330 main.go:141] libmachine: Parsing certificate...
	I0811 23:25:34.283230   71330 cli_runner.go:164] Run: docker network inspect multinode-891155 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0811 23:25:34.301797   71330 network_create.go:76] Found existing network {name:multinode-891155 subnet:0x400138ab70 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0811 23:25:34.301837   71330 kic.go:117] calculated static IP "192.168.58.3" for the "multinode-891155-m02" container
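
The "calculated static IP" line is simple arithmetic on the existing network: gateway 192.168.58.1 offset by the machine's index in the cluster puts the second machine on .3. A sketch under that /24 assumption:

package main

import (
	"fmt"
	"net"
)

// staticIPFor offsets the network gateway by the 1-based machine index,
// so machine 2 behind a 192.168.58.1 gateway lands on 192.168.58.3.
// Assumes a /24 with no wraparound, matching the network above.
func staticIPFor(gateway net.IP, machineIndex int) net.IP {
	ip := make(net.IP, 4)
	copy(ip, gateway.To4())
	ip[3] += byte(machineIndex)
	return ip
}

func main() {
	fmt.Println(staticIPFor(net.ParseIP("192.168.58.1"), 2)) // 192.168.58.3
}
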
	I0811 23:25:34.301910   71330 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0811 23:25:34.323201   71330 cli_runner.go:164] Run: docker volume create multinode-891155-m02 --label name.minikube.sigs.k8s.io=multinode-891155-m02 --label created_by.minikube.sigs.k8s.io=true
	I0811 23:25:34.343866   71330 oci.go:103] Successfully created a docker volume multinode-891155-m02
	I0811 23:25:34.343952   71330 cli_runner.go:164] Run: docker run --rm --name multinode-891155-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-891155-m02 --entrypoint /usr/bin/test -v multinode-891155-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 -d /var/lib
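	For readers tracing the kic provisioning flow, the volume step above reduces to this shell sketch (hedged; "demo-node" is a hypothetical volume/container name, and the image tag is taken from the log with its digest elided):
	    docker volume create demo-node --label created_by.minikube.sigs.k8s.io=true
	    # a throwaway container confirms /var/lib exists inside the new volume
	    docker run --rm --entrypoint /usr/bin/test \
	      -v demo-node:/var \
	      gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971 \
	      -d /var/lib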
	I0811 23:25:34.927727   71330 oci.go:107] Successfully prepared a docker volume multinode-891155-m02
	I0811 23:25:34.927767   71330 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0811 23:25:34.927786   71330 kic.go:190] Starting extracting preloaded images to volume ...
	I0811 23:25:34.927875   71330 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17044-2333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-891155-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 -I lz4 -xf /preloaded.tar -C /extractDir
	I0811 23:25:39.135251   71330 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17044-2333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-891155-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 -I lz4 -xf /preloaded.tar -C /extractDir: (4.207333577s)
	I0811 23:25:39.135284   71330 kic.go:199] duration metric: took 4.207494 seconds to extract preloaded images to volume
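	To verify an extraction like this by hand, a hypothetical spot-check (same demo names as the sketch above) would be:
	    docker run --rm --entrypoint /bin/ls \
	      -v demo-node:/var \
	      gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971 \
	      /var/lib
	    # the preloaded container image store should now be visible under /var/lib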
	W0811 23:25:39.135426   71330 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0811 23:25:39.135566   71330 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0811 23:25:39.211271   71330 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-891155-m02 --name multinode-891155-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-891155-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-891155-m02 --network multinode-891155 --ip 192.168.58.3 --volume multinode-891155-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37
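	The long docker run above is easier to read annotated; this is a trimmed, hedged sketch of the flags that matter, not the exact invocation:
	    docker run -d -t --privileged \
	      --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
	      --tmpfs /tmp --tmpfs /run \
	      -v /lib/modules:/lib/modules:ro \
	      --network multinode-891155 --ip 192.168.58.3 \
	      --publish=127.0.0.1::22 --publish=127.0.0.1::8443 \
	      --volume demo-node:/var \
	      --hostname demo-node --name demo-node \
	      gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971
	    # --privileged + unconfined seccomp/apparmor: systemd and CRI-O need broad access
	    # --tmpfs /tmp,/run: fresh tmpfs mounts for systemd to manage
	    # --ip: the static address calculated from the cluster subnet (192.168.58.0/24)
	    # --publish 127.0.0.1::N: bind random loopback host ports for SSH and the API server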
	I0811 23:25:39.601491   71330 cli_runner.go:164] Run: docker container inspect multinode-891155-m02 --format={{.State.Running}}
	I0811 23:25:39.624912   71330 cli_runner.go:164] Run: docker container inspect multinode-891155-m02 --format={{.State.Status}}
	I0811 23:25:39.657057   71330 cli_runner.go:164] Run: docker exec multinode-891155-m02 stat /var/lib/dpkg/alternatives/iptables
	I0811 23:25:39.734038   71330 oci.go:144] the created container "multinode-891155-m02" has a running status.
	I0811 23:25:39.734070   71330 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17044-2333/.minikube/machines/multinode-891155-m02/id_rsa...
	I0811 23:25:40.431418   71330 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/machines/multinode-891155-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0811 23:25:40.431468   71330 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17044-2333/.minikube/machines/multinode-891155-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0811 23:25:40.461877   71330 cli_runner.go:164] Run: docker container inspect multinode-891155-m02 --format={{.State.Status}}
	I0811 23:25:40.502796   71330 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0811 23:25:40.502823   71330 kic_runner.go:114] Args: [docker exec --privileged multinode-891155-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
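	A minimal sketch of the SSH key install minikube just performed, assuming hypothetical local paths (./id_rsa) and the demo-node container from the earlier sketches:
	    ssh-keygen -t rsa -N '' -f ./id_rsa
	    docker cp ./id_rsa.pub demo-node:/home/docker/.ssh/authorized_keys
	    docker exec --privileged demo-node chown docker:docker /home/docker/.ssh/authorized_keys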
	I0811 23:25:40.591217   71330 cli_runner.go:164] Run: docker container inspect multinode-891155-m02 --format={{.State.Status}}
	I0811 23:25:40.621164   71330 machine.go:88] provisioning docker machine ...
	I0811 23:25:40.621193   71330 ubuntu.go:169] provisioning hostname "multinode-891155-m02"
	I0811 23:25:40.621259   71330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-891155-m02
	I0811 23:25:40.649002   71330 main.go:141] libmachine: Using SSH client type: native
	I0811 23:25:40.649487   71330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f940] 0x3a22d0 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0811 23:25:40.649508   71330 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-891155-m02 && echo "multinode-891155-m02" | sudo tee /etc/hostname
	I0811 23:25:40.849399   71330 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-891155-m02
	
	I0811 23:25:40.849476   71330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-891155-m02
	I0811 23:25:40.879891   71330 main.go:141] libmachine: Using SSH client type: native
	I0811 23:25:40.880367   71330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f940] 0x3a22d0 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0811 23:25:40.880387   71330 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-891155-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-891155-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-891155-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0811 23:25:41.042505   71330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
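	With the forwarded loopback port from the log (32852), the hostname provisioning can be re-checked by hand; the key path is copied from the log:
	    ssh -o StrictHostKeyChecking=no \
	      -i /home/jenkins/minikube-integration/17044-2333/.minikube/machines/multinode-891155-m02/id_rsa \
	      -p 32852 docker@127.0.0.1 hostname
	    # expected output: multinode-891155-m02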
	I0811 23:25:41.042531   71330 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17044-2333/.minikube CaCertPath:/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17044-2333/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17044-2333/.minikube}
	I0811 23:25:41.042548   71330 ubuntu.go:177] setting up certificates
	I0811 23:25:41.042557   71330 provision.go:83] configureAuth start
	I0811 23:25:41.042615   71330 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-891155-m02
	I0811 23:25:41.062449   71330 provision.go:138] copyHostCerts
	I0811 23:25:41.062493   71330 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17044-2333/.minikube/ca.pem
	I0811 23:25:41.062525   71330 exec_runner.go:144] found /home/jenkins/minikube-integration/17044-2333/.minikube/ca.pem, removing ...
	I0811 23:25:41.062536   71330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17044-2333/.minikube/ca.pem
	I0811 23:25:41.062617   71330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17044-2333/.minikube/ca.pem (1082 bytes)
	I0811 23:25:41.062703   71330 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17044-2333/.minikube/cert.pem
	I0811 23:25:41.062724   71330 exec_runner.go:144] found /home/jenkins/minikube-integration/17044-2333/.minikube/cert.pem, removing ...
	I0811 23:25:41.062731   71330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17044-2333/.minikube/cert.pem
	I0811 23:25:41.062759   71330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17044-2333/.minikube/cert.pem (1123 bytes)
	I0811 23:25:41.062804   71330 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17044-2333/.minikube/key.pem
	I0811 23:25:41.062825   71330 exec_runner.go:144] found /home/jenkins/minikube-integration/17044-2333/.minikube/key.pem, removing ...
	I0811 23:25:41.062833   71330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17044-2333/.minikube/key.pem
	I0811 23:25:41.062861   71330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17044-2333/.minikube/key.pem (1675 bytes)
	I0811 23:25:41.062914   71330 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17044-2333/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca-key.pem org=jenkins.multinode-891155-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-891155-m02]
	I0811 23:25:41.219088   71330 provision.go:172] copyRemoteCerts
	I0811 23:25:41.219153   71330 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0811 23:25:41.219204   71330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-891155-m02
	I0811 23:25:41.237608   71330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/multinode-891155-m02/id_rsa Username:docker}
	I0811 23:25:41.343826   71330 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0811 23:25:41.343935   71330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0811 23:25:41.373571   71330 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0811 23:25:41.373647   71330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0811 23:25:41.403578   71330 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0811 23:25:41.403639   71330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
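	minikube generates the server certificate in Go; a rough CLI approximation of the same CSR/sign step, assuming the ca.pem/ca-key.pem pair from the log and OpenSSL 3.x for -copy_extensions, is:
	    openssl genrsa -out server-key.pem 2048
	    openssl req -new -key server-key.pem -subj "/O=jenkins.multinode-891155-m02" \
	      -addext "subjectAltName=IP:192.168.58.3,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:multinode-891155-m02" \
	      -out server.csr
	    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	      -days 1095 -copy_extensions copy -out server.pem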
	I0811 23:25:41.432579   71330 provision.go:86] duration metric: configureAuth took 390.008158ms
	I0811 23:25:41.432604   71330 ubuntu.go:193] setting minikube options for container-runtime
	I0811 23:25:41.432804   71330 config.go:182] Loaded profile config "multinode-891155": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0811 23:25:41.432903   71330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-891155-m02
	I0811 23:25:41.452635   71330 main.go:141] libmachine: Using SSH client type: native
	I0811 23:25:41.453061   71330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f940] 0x3a22d0 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0811 23:25:41.453077   71330 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0811 23:25:41.709406   71330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0811 23:25:41.709427   71330 machine.go:91] provisioned docker machine in 1.088244685s
	I0811 23:25:41.709436   71330 client.go:171] LocalClient.Create took 7.426700158s
	I0811 23:25:41.709448   71330 start.go:167] duration metric: libmachine.API.Create for "multinode-891155" took 7.426751399s
	I0811 23:25:41.709457   71330 start.go:300] post-start starting for "multinode-891155-m02" (driver="docker")
	I0811 23:25:41.709466   71330 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0811 23:25:41.709529   71330 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0811 23:25:41.709577   71330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-891155-m02
	I0811 23:25:41.729145   71330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/multinode-891155-m02/id_rsa Username:docker}
	I0811 23:25:41.832379   71330 ssh_runner.go:195] Run: cat /etc/os-release
	I0811 23:25:41.836336   71330 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.2 LTS"
	I0811 23:25:41.836353   71330 command_runner.go:130] > NAME="Ubuntu"
	I0811 23:25:41.836360   71330 command_runner.go:130] > VERSION_ID="22.04"
	I0811 23:25:41.836367   71330 command_runner.go:130] > VERSION="22.04.2 LTS (Jammy Jellyfish)"
	I0811 23:25:41.836372   71330 command_runner.go:130] > VERSION_CODENAME=jammy
	I0811 23:25:41.836377   71330 command_runner.go:130] > ID=ubuntu
	I0811 23:25:41.836382   71330 command_runner.go:130] > ID_LIKE=debian
	I0811 23:25:41.836387   71330 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0811 23:25:41.836397   71330 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0811 23:25:41.836403   71330 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0811 23:25:41.836412   71330 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0811 23:25:41.836420   71330 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0811 23:25:41.836460   71330 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0811 23:25:41.836489   71330 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0811 23:25:41.836500   71330 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0811 23:25:41.836512   71330 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0811 23:25:41.836522   71330 filesync.go:126] Scanning /home/jenkins/minikube-integration/17044-2333/.minikube/addons for local assets ...
	I0811 23:25:41.836579   71330 filesync.go:126] Scanning /home/jenkins/minikube-integration/17044-2333/.minikube/files for local assets ...
	I0811 23:25:41.836662   71330 filesync.go:149] local asset: /home/jenkins/minikube-integration/17044-2333/.minikube/files/etc/ssl/certs/76342.pem -> 76342.pem in /etc/ssl/certs
	I0811 23:25:41.836673   71330 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/files/etc/ssl/certs/76342.pem -> /etc/ssl/certs/76342.pem
	I0811 23:25:41.836774   71330 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0811 23:25:41.847507   71330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/files/etc/ssl/certs/76342.pem --> /etc/ssl/certs/76342.pem (1708 bytes)
	I0811 23:25:41.876766   71330 start.go:303] post-start completed in 167.295194ms
	I0811 23:25:41.877259   71330 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-891155-m02
	I0811 23:25:41.899086   71330 profile.go:148] Saving config to /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/multinode-891155/config.json ...
	I0811 23:25:41.899380   71330 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0811 23:25:41.899428   71330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-891155-m02
	I0811 23:25:41.918169   71330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/multinode-891155-m02/id_rsa Username:docker}
	I0811 23:25:42.027967   71330 command_runner.go:130] > 11%
	I0811 23:25:42.028243   71330 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0811 23:25:42.037716   71330 command_runner.go:130] > 174G
	I0811 23:25:42.038323   71330 start.go:128] duration metric: createHost completed in 7.759302597s
	I0811 23:25:42.038347   71330 start.go:83] releasing machines lock for "multinode-891155-m02", held for 7.75947617s
	I0811 23:25:42.038430   71330 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-891155-m02
	I0811 23:25:42.067085   71330 out.go:177] * Found network options:
	I0811 23:25:42.068755   71330 out.go:177]   - NO_PROXY=192.168.58.2
	W0811 23:25:42.070838   71330 proxy.go:119] fail to check proxy env: Error ip not in block
	W0811 23:25:42.070886   71330 proxy.go:119] fail to check proxy env: Error ip not in block
	I0811 23:25:42.070969   71330 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0811 23:25:42.071018   71330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-891155-m02
	I0811 23:25:42.071302   71330 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0811 23:25:42.071359   71330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-891155-m02
	I0811 23:25:42.093966   71330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/multinode-891155-m02/id_rsa Username:docker}
	I0811 23:25:42.100540   71330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/multinode-891155-m02/id_rsa Username:docker}
	I0811 23:25:42.361426   71330 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0811 23:25:42.361490   71330 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0811 23:25:42.367253   71330 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0811 23:25:42.367277   71330 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0811 23:25:42.367286   71330 command_runner.go:130] > Device: b3h/179d	Inode: 1302513     Links: 1
	I0811 23:25:42.367293   71330 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0811 23:25:42.367301   71330 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0811 23:25:42.367307   71330 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0811 23:25:42.367314   71330 command_runner.go:130] > Change: 2023-08-11 23:01:53.862838644 +0000
	I0811 23:25:42.367325   71330 command_runner.go:130] >  Birth: 2023-08-11 23:01:53.862838644 +0000
	I0811 23:25:42.367411   71330 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0811 23:25:42.390850   71330 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0811 23:25:42.390994   71330 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0811 23:25:42.431905   71330 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0811 23:25:42.431936   71330 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
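	Both find passes implement disable-by-rename: matching CNI configs gain a .mk_disabled suffix so CRI-O ignores them. An equivalent standalone sketch:
	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	      -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;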
	I0811 23:25:42.431943   71330 start.go:466] detecting cgroup driver to use...
	I0811 23:25:42.431976   71330 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0811 23:25:42.432028   71330 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0811 23:25:42.451312   71330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0811 23:25:42.466356   71330 docker.go:196] disabling cri-docker service (if available) ...
	I0811 23:25:42.466459   71330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0811 23:25:42.483565   71330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0811 23:25:42.500740   71330 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0811 23:25:42.606805   71330 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0811 23:25:42.622937   71330 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0811 23:25:42.714874   71330 docker.go:212] disabling docker service ...
	I0811 23:25:42.714980   71330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0811 23:25:42.737446   71330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0811 23:25:42.751757   71330 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0811 23:25:42.851973   71330 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0811 23:25:42.852105   71330 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0811 23:25:42.866345   71330 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0811 23:25:42.967292   71330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
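	Condensed, the docker-disabling dance above amounts to the following (hedged sketch, run inside the node container):
	    sudo systemctl stop -f docker.socket docker.service
	    sudo systemctl disable docker.socket
	    sudo systemctl mask docker.service
	    systemctl is-active --quiet docker || echo "docker is down"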
	I0811 23:25:42.980485   71330 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0811 23:25:42.999176   71330 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0811 23:25:43.000434   71330 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0811 23:25:43.000527   71330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0811 23:25:43.015745   71330 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0811 23:25:43.015853   71330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0811 23:25:43.029761   71330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0811 23:25:43.042729   71330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
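	The net effect of these sed edits on the drop-in is shown illustratively below (exact section layout of 02-crio.conf may differ):
	    cat /etc/crio/crio.conf.d/02-crio.conf
	    # [crio.runtime]
	    # cgroup_manager = "cgroupfs"
	    # conmon_cgroup = "pod"
	    # ...
	    # pause_image = "registry.k8s.io/pause:3.9"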
	I0811 23:25:43.055537   71330 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0811 23:25:43.068233   71330 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0811 23:25:43.077487   71330 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0811 23:25:43.078714   71330 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0811 23:25:43.089059   71330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0811 23:25:43.190528   71330 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0811 23:25:43.334179   71330 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0811 23:25:43.334278   71330 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0811 23:25:43.339037   71330 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0811 23:25:43.339098   71330 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0811 23:25:43.339119   71330 command_runner.go:130] > Device: bdh/189d	Inode: 186         Links: 1
	I0811 23:25:43.339157   71330 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0811 23:25:43.339186   71330 command_runner.go:130] > Access: 2023-08-11 23:25:43.319935505 +0000
	I0811 23:25:43.339210   71330 command_runner.go:130] > Modify: 2023-08-11 23:25:43.319935505 +0000
	I0811 23:25:43.339229   71330 command_runner.go:130] > Change: 2023-08-11 23:25:43.319935505 +0000
	I0811 23:25:43.339247   71330 command_runner.go:130] >  Birth: -
	I0811 23:25:43.339587   71330 start.go:534] Will wait 60s for crictl version
	I0811 23:25:43.339671   71330 ssh_runner.go:195] Run: which crictl
	I0811 23:25:43.344295   71330 command_runner.go:130] > /usr/bin/crictl
	I0811 23:25:43.344803   71330 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0811 23:25:43.389027   71330 command_runner.go:130] > Version:  0.1.0
	I0811 23:25:43.389119   71330 command_runner.go:130] > RuntimeName:  cri-o
	I0811 23:25:43.389140   71330 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0811 23:25:43.389166   71330 command_runner.go:130] > RuntimeApiVersion:  v1
	I0811 23:25:43.391428   71330 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
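	crictl reads the /etc/crictl.yaml written a few lines earlier; the same probe can be run by hand (the endpoint flag is optional once the config file is in place):
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version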
	I0811 23:25:43.391552   71330 ssh_runner.go:195] Run: crio --version
	I0811 23:25:43.435562   71330 command_runner.go:130] > crio version 1.24.6
	I0811 23:25:43.435629   71330 command_runner.go:130] > Version:          1.24.6
	I0811 23:25:43.435652   71330 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0811 23:25:43.435698   71330 command_runner.go:130] > GitTreeState:     clean
	I0811 23:25:43.435723   71330 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0811 23:25:43.435744   71330 command_runner.go:130] > GoVersion:        go1.18.2
	I0811 23:25:43.435764   71330 command_runner.go:130] > Compiler:         gc
	I0811 23:25:43.435783   71330 command_runner.go:130] > Platform:         linux/arm64
	I0811 23:25:43.435805   71330 command_runner.go:130] > Linkmode:         dynamic
	I0811 23:25:43.435829   71330 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0811 23:25:43.435849   71330 command_runner.go:130] > SeccompEnabled:   true
	I0811 23:25:43.435869   71330 command_runner.go:130] > AppArmorEnabled:  false
	I0811 23:25:43.438019   71330 ssh_runner.go:195] Run: crio --version
	I0811 23:25:43.482493   71330 command_runner.go:130] > crio version 1.24.6
	I0811 23:25:43.482602   71330 command_runner.go:130] > Version:          1.24.6
	I0811 23:25:43.482620   71330 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0811 23:25:43.482635   71330 command_runner.go:130] > GitTreeState:     clean
	I0811 23:25:43.482654   71330 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0811 23:25:43.482661   71330 command_runner.go:130] > GoVersion:        go1.18.2
	I0811 23:25:43.482686   71330 command_runner.go:130] > Compiler:         gc
	I0811 23:25:43.482692   71330 command_runner.go:130] > Platform:         linux/arm64
	I0811 23:25:43.482711   71330 command_runner.go:130] > Linkmode:         dynamic
	I0811 23:25:43.482729   71330 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0811 23:25:43.482753   71330 command_runner.go:130] > SeccompEnabled:   true
	I0811 23:25:43.482765   71330 command_runner.go:130] > AppArmorEnabled:  false
	I0811 23:25:43.485550   71330 out.go:177] * Preparing Kubernetes v1.27.4 on CRI-O 1.24.6 ...
	I0811 23:25:43.487406   71330 out.go:177]   - env NO_PROXY=192.168.58.2
	I0811 23:25:43.488887   71330 cli_runner.go:164] Run: docker network inspect multinode-891155 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0811 23:25:43.507182   71330 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0811 23:25:43.511959   71330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
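	The host.minikube.internal mapping is rewritten idempotently: drop any stale line, append the fresh one, copy the temp file back. Reformatted for readability:
	    { grep -v $'\thost.minikube.internal$' /etc/hosts
	      printf '192.168.58.1\thost.minikube.internal\n'
	    } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts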
	I0811 23:25:43.525692   71330 certs.go:56] Setting up /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/multinode-891155 for IP: 192.168.58.3
	I0811 23:25:43.525724   71330 certs.go:190] acquiring lock for shared ca certs: {Name:mk92ef0e52f7a4bf6e55e35fe7431dc846a67439 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:25:43.525871   71330 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17044-2333/.minikube/ca.key
	I0811 23:25:43.525924   71330 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17044-2333/.minikube/proxy-client-ca.key
	I0811 23:25:43.525944   71330 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0811 23:25:43.525963   71330 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0811 23:25:43.525980   71330 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0811 23:25:43.525992   71330 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0811 23:25:43.526048   71330 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/home/jenkins/minikube-integration/17044-2333/.minikube/certs/7634.pem (1338 bytes)
	W0811 23:25:43.526083   71330 certs.go:433] ignoring /home/jenkins/minikube-integration/17044-2333/.minikube/certs/home/jenkins/minikube-integration/17044-2333/.minikube/certs/7634_empty.pem, impossibly tiny 0 bytes
	I0811 23:25:43.526095   71330 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca-key.pem (1675 bytes)
	I0811 23:25:43.526138   71330 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem (1082 bytes)
	I0811 23:25:43.526166   71330 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/home/jenkins/minikube-integration/17044-2333/.minikube/certs/cert.pem (1123 bytes)
	I0811 23:25:43.526191   71330 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/home/jenkins/minikube-integration/17044-2333/.minikube/certs/key.pem (1675 bytes)
	I0811 23:25:43.526239   71330 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-2333/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17044-2333/.minikube/files/etc/ssl/certs/76342.pem (1708 bytes)
	I0811 23:25:43.526271   71330 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0811 23:25:43.526287   71330 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/7634.pem -> /usr/share/ca-certificates/7634.pem
	I0811 23:25:43.526299   71330 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/files/etc/ssl/certs/76342.pem -> /usr/share/ca-certificates/76342.pem
	I0811 23:25:43.526623   71330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0811 23:25:43.558806   71330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0811 23:25:43.588481   71330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0811 23:25:43.617661   71330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0811 23:25:43.647254   71330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0811 23:25:43.677165   71330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/certs/7634.pem --> /usr/share/ca-certificates/7634.pem (1338 bytes)
	I0811 23:25:43.708794   71330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/files/etc/ssl/certs/76342.pem --> /usr/share/ca-certificates/76342.pem (1708 bytes)
	I0811 23:25:43.742744   71330 ssh_runner.go:195] Run: openssl version
	I0811 23:25:43.751731   71330 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0811 23:25:43.751843   71330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0811 23:25:43.767246   71330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0811 23:25:43.773254   71330 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 11 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I0811 23:25:43.773386   71330 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 11 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I0811 23:25:43.773493   71330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0811 23:25:43.783676   71330 command_runner.go:130] > b5213941
	I0811 23:25:43.784182   71330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0811 23:25:43.799034   71330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7634.pem && ln -fs /usr/share/ca-certificates/7634.pem /etc/ssl/certs/7634.pem"
	I0811 23:25:43.810774   71330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7634.pem
	I0811 23:25:43.817409   71330 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 11 23:09 /usr/share/ca-certificates/7634.pem
	I0811 23:25:43.817604   71330 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 11 23:09 /usr/share/ca-certificates/7634.pem
	I0811 23:25:43.817699   71330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7634.pem
	I0811 23:25:43.830234   71330 command_runner.go:130] > 51391683
	I0811 23:25:43.830923   71330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7634.pem /etc/ssl/certs/51391683.0"
	I0811 23:25:43.846937   71330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/76342.pem && ln -fs /usr/share/ca-certificates/76342.pem /etc/ssl/certs/76342.pem"
	I0811 23:25:43.863482   71330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/76342.pem
	I0811 23:25:43.869045   71330 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 11 23:09 /usr/share/ca-certificates/76342.pem
	I0811 23:25:43.869366   71330 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 11 23:09 /usr/share/ca-certificates/76342.pem
	I0811 23:25:43.869456   71330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/76342.pem
	I0811 23:25:43.877851   71330 command_runner.go:130] > 3ec20f2e
	I0811 23:25:43.878304   71330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/76342.pem /etc/ssl/certs/3ec20f2e.0"
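	Each "openssl x509 -hash" / "ln -fs" pair builds the subject-hash symlinks OpenSSL uses to locate CAs in /etc/ssl/certs, e.g. (values from the log):
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	    # here h=b5213941, matching the hash printed above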
	I0811 23:25:43.890765   71330 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0811 23:25:43.895359   71330 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0811 23:25:43.895740   71330 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0811 23:25:43.895846   71330 ssh_runner.go:195] Run: crio config
	I0811 23:25:43.951836   71330 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0811 23:25:43.951908   71330 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0811 23:25:43.951932   71330 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0811 23:25:43.951952   71330 command_runner.go:130] > #
	I0811 23:25:43.951983   71330 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0811 23:25:43.952010   71330 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0811 23:25:43.952032   71330 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0811 23:25:43.952056   71330 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0811 23:25:43.952088   71330 command_runner.go:130] > # reload'.
	I0811 23:25:43.952112   71330 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0811 23:25:43.952136   71330 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0811 23:25:43.952158   71330 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0811 23:25:43.952197   71330 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0811 23:25:43.952222   71330 command_runner.go:130] > [crio]
	I0811 23:25:43.952245   71330 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0811 23:25:43.952267   71330 command_runner.go:130] > # containers images, in this directory.
	I0811 23:25:43.952309   71330 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0811 23:25:43.952333   71330 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0811 23:25:43.952354   71330 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0811 23:25:43.952377   71330 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0811 23:25:43.952412   71330 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0811 23:25:43.952748   71330 command_runner.go:130] > # storage_driver = "vfs"
	I0811 23:25:43.952789   71330 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0811 23:25:43.952810   71330 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0811 23:25:43.952871   71330 command_runner.go:130] > # storage_option = [
	I0811 23:25:43.952897   71330 command_runner.go:130] > # ]
	I0811 23:25:43.952919   71330 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0811 23:25:43.952941   71330 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0811 23:25:43.953327   71330 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0811 23:25:43.953369   71330 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0811 23:25:43.953393   71330 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0811 23:25:43.953418   71330 command_runner.go:130] > # always happen on a node reboot
	I0811 23:25:43.953452   71330 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0811 23:25:43.953481   71330 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0811 23:25:43.953507   71330 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0811 23:25:43.953533   71330 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0811 23:25:43.953566   71330 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0811 23:25:43.953598   71330 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0811 23:25:43.953623   71330 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0811 23:25:43.953643   71330 command_runner.go:130] > # internal_wipe = true
	I0811 23:25:43.953683   71330 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0811 23:25:43.953707   71330 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0811 23:25:43.953727   71330 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0811 23:25:43.953748   71330 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0811 23:25:43.953781   71330 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0811 23:25:43.953802   71330 command_runner.go:130] > [crio.api]
	I0811 23:25:43.953821   71330 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0811 23:25:43.953841   71330 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0811 23:25:43.953863   71330 command_runner.go:130] > # IP address on which the stream server will listen.
	I0811 23:25:43.953890   71330 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0811 23:25:43.953916   71330 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0811 23:25:43.953938   71330 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0811 23:25:43.953959   71330 command_runner.go:130] > # stream_port = "0"
	I0811 23:25:43.953991   71330 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0811 23:25:43.954013   71330 command_runner.go:130] > # stream_enable_tls = false
	I0811 23:25:43.954034   71330 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0811 23:25:43.954054   71330 command_runner.go:130] > # stream_idle_timeout = ""
	I0811 23:25:43.954091   71330 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0811 23:25:43.954115   71330 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0811 23:25:43.954132   71330 command_runner.go:130] > # minutes.
	I0811 23:25:43.954152   71330 command_runner.go:130] > # stream_tls_cert = ""
	I0811 23:25:43.954174   71330 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0811 23:25:43.954209   71330 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0811 23:25:43.954229   71330 command_runner.go:130] > # stream_tls_key = ""
	I0811 23:25:43.954252   71330 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0811 23:25:43.954283   71330 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0811 23:25:43.954306   71330 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0811 23:25:43.954327   71330 command_runner.go:130] > # stream_tls_ca = ""
	I0811 23:25:43.954350   71330 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0811 23:25:43.954379   71330 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0811 23:25:43.954406   71330 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0811 23:25:43.954423   71330 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0811 23:25:43.954501   71330 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0811 23:25:43.954528   71330 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0811 23:25:43.954547   71330 command_runner.go:130] > [crio.runtime]
	I0811 23:25:43.954571   71330 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0811 23:25:43.954604   71330 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0811 23:25:43.954626   71330 command_runner.go:130] > # "nofile=1024:2048"
	I0811 23:25:43.954648   71330 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0811 23:25:43.954758   71330 command_runner.go:130] > # default_ulimits = [
	I0811 23:25:43.954786   71330 command_runner.go:130] > # ]
	I0811 23:25:43.954808   71330 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0811 23:25:43.954848   71330 command_runner.go:130] > # no_pivot = false
	I0811 23:25:43.954871   71330 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0811 23:25:43.954896   71330 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0811 23:25:43.954929   71330 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0811 23:25:43.954955   71330 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0811 23:25:43.954978   71330 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0811 23:25:43.955002   71330 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0811 23:25:43.955029   71330 command_runner.go:130] > # conmon = ""
	I0811 23:25:43.955052   71330 command_runner.go:130] > # Cgroup setting for conmon
	I0811 23:25:43.955074   71330 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0811 23:25:43.955093   71330 command_runner.go:130] > conmon_cgroup = "pod"
	I0811 23:25:43.955126   71330 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0811 23:25:43.955148   71330 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0811 23:25:43.955170   71330 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0811 23:25:43.955192   71330 command_runner.go:130] > # conmon_env = [
	I0811 23:25:43.955210   71330 command_runner.go:130] > # ]
	I0811 23:25:43.955243   71330 command_runner.go:130] > # Additional environment variables to set for all the
	I0811 23:25:43.955263   71330 command_runner.go:130] > # containers. These are overridden if set in the
	I0811 23:25:43.955291   71330 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0811 23:25:43.955321   71330 command_runner.go:130] > # default_env = [
	I0811 23:25:43.955345   71330 command_runner.go:130] > # ]
	I0811 23:25:43.955367   71330 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0811 23:25:43.955386   71330 command_runner.go:130] > # selinux = false
	I0811 23:25:43.955408   71330 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0811 23:25:43.955441   71330 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0811 23:25:43.955461   71330 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0811 23:25:43.955481   71330 command_runner.go:130] > # seccomp_profile = ""
	I0811 23:25:43.955511   71330 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0811 23:25:43.955534   71330 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0811 23:25:43.955555   71330 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0811 23:25:43.955573   71330 command_runner.go:130] > # which might increase security.
	I0811 23:25:43.955593   71330 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0811 23:25:43.955625   71330 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0811 23:25:43.955647   71330 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0811 23:25:43.955677   71330 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0811 23:25:43.955707   71330 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0811 23:25:43.955729   71330 command_runner.go:130] > # This option supports live configuration reload.
	I0811 23:25:43.955749   71330 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0811 23:25:43.955782   71330 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0811 23:25:43.955808   71330 command_runner.go:130] > # the cgroup blockio controller.
	I0811 23:25:43.955825   71330 command_runner.go:130] > # blockio_config_file = ""
	I0811 23:25:43.955847   71330 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0811 23:25:43.955866   71330 command_runner.go:130] > # irqbalance daemon.
	I0811 23:25:43.955909   71330 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0811 23:25:43.955930   71330 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0811 23:25:43.955951   71330 command_runner.go:130] > # This option supports live configuration reload.
	I0811 23:25:43.955980   71330 command_runner.go:130] > # rdt_config_file = ""
	I0811 23:25:43.956007   71330 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0811 23:25:43.956027   71330 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0811 23:25:43.956048   71330 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0811 23:25:43.956076   71330 command_runner.go:130] > # separate_pull_cgroup = ""
	I0811 23:25:43.956103   71330 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0811 23:25:43.956125   71330 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0811 23:25:43.956144   71330 command_runner.go:130] > # will be added.
	I0811 23:25:43.956174   71330 command_runner.go:130] > # default_capabilities = [
	I0811 23:25:43.956210   71330 command_runner.go:130] > # 	"CHOWN",
	I0811 23:25:43.956227   71330 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0811 23:25:43.956246   71330 command_runner.go:130] > # 	"FSETID",
	I0811 23:25:43.956274   71330 command_runner.go:130] > # 	"FOWNER",
	I0811 23:25:43.956294   71330 command_runner.go:130] > # 	"SETGID",
	I0811 23:25:43.956313   71330 command_runner.go:130] > # 	"SETUID",
	I0811 23:25:43.956331   71330 command_runner.go:130] > # 	"SETPCAP",
	I0811 23:25:43.956365   71330 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0811 23:25:43.956384   71330 command_runner.go:130] > # 	"KILL",
	I0811 23:25:43.956403   71330 command_runner.go:130] > # ]
	I0811 23:25:43.956438   71330 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0811 23:25:43.956466   71330 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0811 23:25:43.956487   71330 command_runner.go:130] > # add_inheritable_capabilities = true
	I0811 23:25:43.956508   71330 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0811 23:25:43.956541   71330 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0811 23:25:43.956571   71330 command_runner.go:130] > # default_sysctls = [
	I0811 23:25:43.956590   71330 command_runner.go:130] > # ]
	I0811 23:25:43.956610   71330 command_runner.go:130] > # List of devices on the host that a
	I0811 23:25:43.956643   71330 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0811 23:25:43.956671   71330 command_runner.go:130] > # allowed_devices = [
	I0811 23:25:43.956690   71330 command_runner.go:130] > # 	"/dev/fuse",
	I0811 23:25:43.956709   71330 command_runner.go:130] > # ]
	I0811 23:25:43.956746   71330 command_runner.go:130] > # List of additional devices. specified as
	I0811 23:25:43.957104   71330 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0811 23:25:43.958880   71330 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0811 23:25:43.958907   71330 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0811 23:25:43.958942   71330 command_runner.go:130] > # additional_devices = [
	I0811 23:25:43.958969   71330 command_runner.go:130] > # ]
	I0811 23:25:43.959032   71330 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0811 23:25:43.959054   71330 command_runner.go:130] > # cdi_spec_dirs = [
	I0811 23:25:43.959106   71330 command_runner.go:130] > # 	"/etc/cdi",
	I0811 23:25:43.959140   71330 command_runner.go:130] > # 	"/var/run/cdi",
	I0811 23:25:43.959185   71330 command_runner.go:130] > # ]
	I0811 23:25:43.959226   71330 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0811 23:25:43.959249   71330 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0811 23:25:43.959291   71330 command_runner.go:130] > # Defaults to false.
	I0811 23:25:43.959318   71330 command_runner.go:130] > # device_ownership_from_security_context = false
	I0811 23:25:43.959345   71330 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0811 23:25:43.959361   71330 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0811 23:25:43.959367   71330 command_runner.go:130] > # hooks_dir = [
	I0811 23:25:43.959374   71330 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0811 23:25:43.959378   71330 command_runner.go:130] > # ]
	I0811 23:25:43.959386   71330 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0811 23:25:43.959394   71330 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0811 23:25:43.959401   71330 command_runner.go:130] > # its default mounts from the following two files:
	I0811 23:25:43.959409   71330 command_runner.go:130] > #
	I0811 23:25:43.959417   71330 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0811 23:25:43.959429   71330 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0811 23:25:43.959437   71330 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0811 23:25:43.959441   71330 command_runner.go:130] > #
	I0811 23:25:43.959449   71330 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0811 23:25:43.959461   71330 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0811 23:25:43.959470   71330 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0811 23:25:43.959476   71330 command_runner.go:130] > #      only add mounts it finds in this file.
	I0811 23:25:43.959481   71330 command_runner.go:130] > #
	I0811 23:25:43.959491   71330 command_runner.go:130] > # default_mounts_file = ""
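As a hedged sketch of the mechanism described above, this points CRI-O at the override file named in the comments; the /SRC:/DST line is a hypothetical example, not a recommendation:

	default_mounts_file = "/etc/containers/mounts.conf"
	# each line of that file is one mount in /SRC:/DST form, e.g.:
	#   /usr/share/zoneinfo:/usr/share/zoneinfo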
	I0811 23:25:43.959503   71330 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0811 23:25:43.959512   71330 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0811 23:25:43.959521   71330 command_runner.go:130] > # pids_limit = 0
	I0811 23:25:43.959529   71330 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0811 23:25:43.959537   71330 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0811 23:25:43.959548   71330 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0811 23:25:43.959559   71330 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0811 23:25:43.959565   71330 command_runner.go:130] > # log_size_max = -1
	I0811 23:25:43.959573   71330 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0811 23:25:43.959585   71330 command_runner.go:130] > # log_to_journald = false
	I0811 23:25:43.959592   71330 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0811 23:25:43.959599   71330 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0811 23:25:43.959610   71330 command_runner.go:130] > # Path to directory for container attach sockets.
	I0811 23:25:43.959616   71330 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0811 23:25:43.959636   71330 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0811 23:25:43.959644   71330 command_runner.go:130] > # bind_mount_prefix = ""
	I0811 23:25:43.959654   71330 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0811 23:25:43.959659   71330 command_runner.go:130] > # read_only = false
	I0811 23:25:43.959679   71330 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0811 23:25:43.959694   71330 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0811 23:25:43.959700   71330 command_runner.go:130] > # live configuration reload.
	I0811 23:25:43.959710   71330 command_runner.go:130] > # log_level = "info"
	I0811 23:25:43.959718   71330 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0811 23:25:43.959724   71330 command_runner.go:130] > # This option supports live configuration reload.
	I0811 23:25:43.959729   71330 command_runner.go:130] > # log_filter = ""
	I0811 23:25:43.959736   71330 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0811 23:25:43.959750   71330 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0811 23:25:43.959756   71330 command_runner.go:130] > # separated by comma.
	I0811 23:25:43.959765   71330 command_runner.go:130] > # uid_mappings = ""
	I0811 23:25:43.959772   71330 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0811 23:25:43.959785   71330 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0811 23:25:43.959790   71330 command_runner.go:130] > # separated by comma.
	I0811 23:25:43.959795   71330 command_runner.go:130] > # gid_mappings = ""
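A minimal sketch of the containerID:HostID:Size syntax described above, assuming a subordinate ID range that starts at host ID 100000 (the values are illustrative only):

	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536"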
	I0811 23:25:43.959810   71330 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0811 23:25:43.959817   71330 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0811 23:25:43.959826   71330 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0811 23:25:43.959833   71330 command_runner.go:130] > # minimum_mappable_uid = -1
	I0811 23:25:43.959844   71330 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0811 23:25:43.959852   71330 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0811 23:25:43.959863   71330 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0811 23:25:43.959869   71330 command_runner.go:130] > # minimum_mappable_gid = -1
	I0811 23:25:43.959880   71330 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0811 23:25:43.959896   71330 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0811 23:25:43.959910   71330 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0811 23:25:43.959915   71330 command_runner.go:130] > # ctr_stop_timeout = 30
	I0811 23:25:43.959922   71330 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0811 23:25:43.959936   71330 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0811 23:25:43.959946   71330 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0811 23:25:43.959952   71330 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0811 23:25:43.959958   71330 command_runner.go:130] > # drop_infra_ctr = true
	I0811 23:25:43.959968   71330 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0811 23:25:43.959978   71330 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0811 23:25:43.959987   71330 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0811 23:25:43.959995   71330 command_runner.go:130] > # infra_ctr_cpuset = ""
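For example, pinning infra containers to the first two CPUs with the Linux CPU list format mentioned above (a sketch; in practice the set should match kubelet's reserved-cpus):

	infra_ctr_cpuset = "0-1"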
	I0811 23:25:43.960002   71330 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0811 23:25:43.960009   71330 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0811 23:25:43.960016   71330 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0811 23:25:43.960025   71330 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0811 23:25:43.960030   71330 command_runner.go:130] > # pinns_path = ""
	I0811 23:25:43.960040   71330 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0811 23:25:43.960048   71330 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0811 23:25:43.960059   71330 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0811 23:25:43.960065   71330 command_runner.go:130] > # default_runtime = "runc"
	I0811 23:25:43.960071   71330 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0811 23:25:43.960081   71330 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0811 23:25:43.960092   71330 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0811 23:25:43.960101   71330 command_runner.go:130] > # creation as a file is not desired either.
	I0811 23:25:43.960111   71330 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0811 23:25:43.960119   71330 command_runner.go:130] > # the hostname is being managed dynamically.
	I0811 23:25:43.960125   71330 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0811 23:25:43.960130   71330 command_runner.go:130] > # ]
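Populated with the /etc/hostname example from the comment above, the entry would look like:

	absent_mount_sources_to_reject = [
		"/etc/hostname",
	]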
	I0811 23:25:43.960141   71330 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0811 23:25:43.960149   71330 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0811 23:25:43.960157   71330 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0811 23:25:43.960164   71330 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0811 23:25:43.960168   71330 command_runner.go:130] > #
	I0811 23:25:43.960174   71330 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0811 23:25:43.960183   71330 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0811 23:25:43.960188   71330 command_runner.go:130] > #  runtime_type = "oci"
	I0811 23:25:43.960194   71330 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0811 23:25:43.960202   71330 command_runner.go:130] > #  privileged_without_host_devices = false
	I0811 23:25:43.960207   71330 command_runner.go:130] > #  allowed_annotations = []
	I0811 23:25:43.960212   71330 command_runner.go:130] > # Where:
	I0811 23:25:43.960221   71330 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0811 23:25:43.960228   71330 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0811 23:25:43.960236   71330 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0811 23:25:43.960244   71330 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0811 23:25:43.960249   71330 command_runner.go:130] > #   in $PATH.
	I0811 23:25:43.960259   71330 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0811 23:25:43.960268   71330 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0811 23:25:43.960279   71330 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0811 23:25:43.960286   71330 command_runner.go:130] > #   state.
	I0811 23:25:43.960294   71330 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0811 23:25:43.960305   71330 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0811 23:25:43.960313   71330 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0811 23:25:43.960320   71330 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0811 23:25:43.960328   71330 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0811 23:25:43.960338   71330 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0811 23:25:43.960344   71330 command_runner.go:130] > #   The currently recognized values are:
	I0811 23:25:43.960356   71330 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0811 23:25:43.960365   71330 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0811 23:25:43.960374   71330 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0811 23:25:43.960385   71330 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0811 23:25:43.960394   71330 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0811 23:25:43.960402   71330 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0811 23:25:43.960409   71330 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0811 23:25:43.960419   71330 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0811 23:25:43.960428   71330 command_runner.go:130] > #   should be moved to the container's cgroup
	I0811 23:25:43.960434   71330 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0811 23:25:43.960442   71330 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0811 23:25:43.960449   71330 command_runner.go:130] > runtime_type = "oci"
	I0811 23:25:43.960455   71330 command_runner.go:130] > runtime_root = "/run/runc"
	I0811 23:25:43.960460   71330 command_runner.go:130] > runtime_config_path = ""
	I0811 23:25:43.960468   71330 command_runner.go:130] > monitor_path = ""
	I0811 23:25:43.960473   71330 command_runner.go:130] > monitor_cgroup = ""
	I0811 23:25:43.960478   71330 command_runner.go:130] > monitor_exec_cgroup = ""
	I0811 23:25:43.960500   71330 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0811 23:25:43.960508   71330 command_runner.go:130] > # running containers
	I0811 23:25:43.960514   71330 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0811 23:25:43.960522   71330 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0811 23:25:43.960536   71330 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0811 23:25:43.960593   71330 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0811 23:25:43.960600   71330 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0811 23:25:43.960606   71330 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0811 23:25:43.960612   71330 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0811 23:25:43.960620   71330 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0811 23:25:43.960630   71330 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0811 23:25:43.960655   71330 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
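Putting the handler format above together, a hedged sketch of registering crun (the commented-out handler above) as an additional runtime; the path and annotation are illustrative, and the binary would need to exist on the host:

	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	allowed_annotations = [
		"io.kubernetes.cri-o.Devices",
	]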
	I0811 23:25:43.960663   71330 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0811 23:25:43.960670   71330 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0811 23:25:43.960678   71330 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0811 23:25:43.960688   71330 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0811 23:25:43.960702   71330 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0811 23:25:43.960709   71330 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0811 23:25:43.960723   71330 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0811 23:25:43.960733   71330 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0811 23:25:43.960744   71330 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0811 23:25:43.960753   71330 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0811 23:25:43.960758   71330 command_runner.go:130] > # Example:
	I0811 23:25:43.960764   71330 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0811 23:25:43.960771   71330 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0811 23:25:43.960779   71330 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0811 23:25:43.960786   71330 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0811 23:25:43.960795   71330 command_runner.go:130] > # cpuset = "0-1"
	I0811 23:25:43.960800   71330 command_runner.go:130] > # cpushares = 0
	I0811 23:25:43.960805   71330 command_runner.go:130] > # Where:
	I0811 23:25:43.960814   71330 command_runner.go:130] > # The workload name is workload-type.
	I0811 23:25:43.960823   71330 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0811 23:25:43.960833   71330 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0811 23:25:43.960840   71330 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0811 23:25:43.960851   71330 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0811 23:25:43.960858   71330 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0811 23:25:43.960865   71330 command_runner.go:130] > # 
	I0811 23:25:43.960874   71330 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0811 23:25:43.960881   71330 command_runner.go:130] > #
	I0811 23:25:43.960895   71330 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0811 23:25:43.960907   71330 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0811 23:25:43.960915   71330 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0811 23:25:43.960923   71330 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0811 23:25:43.960931   71330 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0811 23:25:43.960935   71330 command_runner.go:130] > [crio.image]
	I0811 23:25:43.960943   71330 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0811 23:25:43.960951   71330 command_runner.go:130] > # default_transport = "docker://"
	I0811 23:25:43.960959   71330 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0811 23:25:43.960968   71330 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0811 23:25:43.960976   71330 command_runner.go:130] > # global_auth_file = ""
	I0811 23:25:43.960984   71330 command_runner.go:130] > # The image used to instantiate infra containers.
	I0811 23:25:43.960993   71330 command_runner.go:130] > # This option supports live configuration reload.
	I0811 23:25:43.960999   71330 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0811 23:25:43.961008   71330 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0811 23:25:43.961016   71330 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0811 23:25:43.961022   71330 command_runner.go:130] > # This option supports live configuration reload.
	I0811 23:25:43.961030   71330 command_runner.go:130] > # pause_image_auth_file = ""
	I0811 23:25:43.961037   71330 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0811 23:25:43.961045   71330 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0811 23:25:43.961055   71330 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0811 23:25:43.961063   71330 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0811 23:25:43.961072   71330 command_runner.go:130] > # pause_command = "/pause"
	I0811 23:25:43.961091   71330 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0811 23:25:43.961102   71330 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0811 23:25:43.961109   71330 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0811 23:25:43.961117   71330 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0811 23:25:43.961124   71330 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0811 23:25:43.961129   71330 command_runner.go:130] > # signature_policy = ""
	I0811 23:25:43.961140   71330 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0811 23:25:43.961148   71330 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0811 23:25:43.961153   71330 command_runner.go:130] > # changing them here.
	I0811 23:25:43.961159   71330 command_runner.go:130] > # insecure_registries = [
	I0811 23:25:43.961163   71330 command_runner.go:130] > # ]
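If a registry were overridden here rather than in /etc/containers/registries.conf, an entry would look like this (the registry address is a placeholder):

	insecure_registries = [
		"registry.example.internal:5000",
	]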
	I0811 23:25:43.961171   71330 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0811 23:25:43.961178   71330 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I0811 23:25:43.961187   71330 command_runner.go:130] > # image_volumes = "mkdir"
	I0811 23:25:43.961194   71330 command_runner.go:130] > # Temporary directory to use for storing big files
	I0811 23:25:43.961202   71330 command_runner.go:130] > # big_files_temporary_dir = ""
	I0811 23:25:43.961210   71330 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0811 23:25:43.961215   71330 command_runner.go:130] > # CNI plugins.
	I0811 23:25:43.961220   71330 command_runner.go:130] > [crio.network]
	I0811 23:25:43.961228   71330 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0811 23:25:43.961235   71330 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0811 23:25:43.961243   71330 command_runner.go:130] > # cni_default_network = ""
	I0811 23:25:43.961251   71330 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0811 23:25:43.961258   71330 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0811 23:25:43.961267   71330 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0811 23:25:43.961272   71330 command_runner.go:130] > # plugin_dirs = [
	I0811 23:25:43.961284   71330 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0811 23:25:43.961288   71330 command_runner.go:130] > # ]
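A sketch of selecting a CNI network explicitly instead of relying on first-found, using the defaults shown above; the name kindnet matches the CNI this run deploys, but any network present in network_dir would work:

	[crio.network]
	cni_default_network = "kindnet"
	network_dir = "/etc/cni/net.d/"
	plugin_dirs = [
		"/opt/cni/bin/",
	]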
	I0811 23:25:43.961296   71330 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0811 23:25:43.961301   71330 command_runner.go:130] > [crio.metrics]
	I0811 23:25:43.961307   71330 command_runner.go:130] > # Globally enable or disable metrics support.
	I0811 23:25:43.961315   71330 command_runner.go:130] > # enable_metrics = false
	I0811 23:25:43.961321   71330 command_runner.go:130] > # Specify enabled metrics collectors.
	I0811 23:25:43.961327   71330 command_runner.go:130] > # By default, all metrics are enabled.
	I0811 23:25:43.961337   71330 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0811 23:25:43.961347   71330 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0811 23:25:43.961356   71330 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0811 23:25:43.961365   71330 command_runner.go:130] > # metrics_collectors = [
	I0811 23:25:43.961370   71330 command_runner.go:130] > # 	"operations",
	I0811 23:25:43.961376   71330 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0811 23:25:43.961382   71330 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0811 23:25:43.961388   71330 command_runner.go:130] > # 	"operations_errors",
	I0811 23:25:43.961393   71330 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0811 23:25:43.961406   71330 command_runner.go:130] > # 	"image_pulls_by_name",
	I0811 23:25:43.961412   71330 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0811 23:25:43.961417   71330 command_runner.go:130] > # 	"image_pulls_failures",
	I0811 23:25:43.961426   71330 command_runner.go:130] > # 	"image_pulls_successes",
	I0811 23:25:43.961431   71330 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0811 23:25:43.961439   71330 command_runner.go:130] > # 	"image_layer_reuse",
	I0811 23:25:43.961444   71330 command_runner.go:130] > # 	"containers_oom_total",
	I0811 23:25:43.961453   71330 command_runner.go:130] > # 	"containers_oom",
	I0811 23:25:43.961459   71330 command_runner.go:130] > # 	"processes_defunct",
	I0811 23:25:43.961554   71330 command_runner.go:130] > # 	"operations_total",
	I0811 23:25:43.961564   71330 command_runner.go:130] > # 	"operations_latency_seconds",
	I0811 23:25:43.961574   71330 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0811 23:25:43.961581   71330 command_runner.go:130] > # 	"operations_errors_total",
	I0811 23:25:43.961587   71330 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0811 23:25:43.961595   71330 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0811 23:25:43.961600   71330 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0811 23:25:43.961606   71330 command_runner.go:130] > # 	"image_pulls_success_total",
	I0811 23:25:43.961614   71330 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0811 23:25:43.961619   71330 command_runner.go:130] > # 	"containers_oom_count_total",
	I0811 23:25:43.961624   71330 command_runner.go:130] > # ]
	I0811 23:25:43.961633   71330 command_runner.go:130] > # The port on which the metrics server will listen.
	I0811 23:25:43.961638   71330 command_runner.go:130] > # metrics_port = 9090
	I0811 23:25:43.961645   71330 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0811 23:25:43.961650   71330 command_runner.go:130] > # metrics_socket = ""
	I0811 23:25:43.961656   71330 command_runner.go:130] > # The certificate for the secure metrics server.
	I0811 23:25:43.961666   71330 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0811 23:25:43.961674   71330 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0811 23:25:43.961683   71330 command_runner.go:130] > # certificate on any modification event.
	I0811 23:25:43.961689   71330 command_runner.go:130] > # metrics_cert = ""
	I0811 23:25:43.961697   71330 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0811 23:25:43.961706   71330 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0811 23:25:43.961713   71330 command_runner.go:130] > # metrics_key = ""
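Tying the metrics options together, a minimal sketch that enables the server on the default port with a trimmed collector set (collector names are taken from the default list above):

	[crio.metrics]
	enable_metrics = true
	metrics_port = 9090
	metrics_collectors = [
		"operations",
		"image_pulls_failures",
	]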
	I0811 23:25:43.961720   71330 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0811 23:25:43.961727   71330 command_runner.go:130] > [crio.tracing]
	I0811 23:25:43.961734   71330 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0811 23:25:43.961740   71330 command_runner.go:130] > # enable_tracing = false
	I0811 23:25:43.961749   71330 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0811 23:25:43.961755   71330 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0811 23:25:43.961768   71330 command_runner.go:130] > # Number of samples to collect per million spans.
	I0811 23:25:43.961774   71330 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
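And a sketch of switching tracing on: the endpoint is the commented default above, while a sampling rate of 100000 per million (roughly 10% of spans) is an illustrative choice, not a recommendation:

	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "0.0.0.0:4317"
	tracing_sampling_rate_per_million = 100000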
	I0811 23:25:43.961784   71330 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0811 23:25:43.961789   71330 command_runner.go:130] > [crio.stats]
	I0811 23:25:43.961799   71330 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0811 23:25:43.961806   71330 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0811 23:25:43.961812   71330 command_runner.go:130] > # stats_collection_period = 0
	I0811 23:25:43.961851   71330 command_runner.go:130] ! time="2023-08-11 23:25:43.949135388Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0811 23:25:43.961870   71330 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0811 23:25:43.961963   71330 cni.go:84] Creating CNI manager for ""
	I0811 23:25:43.961974   71330 cni.go:136] 2 nodes found, recommending kindnet
	I0811 23:25:43.961983   71330 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0811 23:25:43.962002   71330 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-891155 NodeName:multinode-891155-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0811 23:25:43.962130   71330 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-891155-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0811 23:25:43.962188   71330 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-891155-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:multinode-891155 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0811 23:25:43.962261   71330 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0811 23:25:43.973883   71330 command_runner.go:130] > kubeadm
	I0811 23:25:43.973902   71330 command_runner.go:130] > kubectl
	I0811 23:25:43.973907   71330 command_runner.go:130] > kubelet
	I0811 23:25:43.973924   71330 binaries.go:44] Found k8s binaries, skipping transfer
	I0811 23:25:43.973991   71330 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0811 23:25:43.985234   71330 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0811 23:25:44.014428   71330 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0811 23:25:44.039078   71330 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0811 23:25:44.044148   71330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0811 23:25:44.059180   71330 host.go:66] Checking if "multinode-891155" exists ...
	I0811 23:25:44.059454   71330 start.go:301] JoinCluster: &{Name:multinode-891155 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-891155 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0811 23:25:44.059555   71330 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0811 23:25:44.059605   71330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-891155
	I0811 23:25:44.060013   71330 config.go:182] Loaded profile config "multinode-891155": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0811 23:25:44.079138   71330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/multinode-891155/id_rsa Username:docker}
	I0811 23:25:44.256337   71330 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token x4jr9c.w9pmnlw4ccflqstb --discovery-token-ca-cert-hash sha256:8884e7cec26767ea186e311f265f5a190c626a6e55b00221424eafcad2c1cce3 
	I0811 23:25:44.256375   71330 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0811 23:25:44.256407   71330 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token x4jr9c.w9pmnlw4ccflqstb --discovery-token-ca-cert-hash sha256:8884e7cec26767ea186e311f265f5a190c626a6e55b00221424eafcad2c1cce3 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-891155-m02"
	I0811 23:25:44.309558   71330 command_runner.go:130] > [preflight] Running pre-flight checks
	I0811 23:25:44.353412   71330 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0811 23:25:44.353433   71330 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1040-aws
	I0811 23:25:44.353440   71330 command_runner.go:130] > OS: Linux
	I0811 23:25:44.353447   71330 command_runner.go:130] > CGROUPS_CPU: enabled
	I0811 23:25:44.353454   71330 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0811 23:25:44.353460   71330 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0811 23:25:44.353466   71330 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0811 23:25:44.353472   71330 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0811 23:25:44.353480   71330 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0811 23:25:44.353489   71330 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0811 23:25:44.353495   71330 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0811 23:25:44.353501   71330 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0811 23:25:44.466298   71330 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0811 23:25:44.466320   71330 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0811 23:25:44.497401   71330 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0811 23:25:44.497668   71330 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0811 23:25:44.497688   71330 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0811 23:25:44.595815   71330 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0811 23:25:47.613782   71330 command_runner.go:130] > This node has joined the cluster:
	I0811 23:25:47.613807   71330 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0811 23:25:47.613815   71330 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0811 23:25:47.613823   71330 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0811 23:25:47.616855   71330 command_runner.go:130] ! W0811 23:25:44.308977    1025 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0811 23:25:47.616888   71330 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1040-aws\n", err: exit status 1
	I0811 23:25:47.616900   71330 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0811 23:25:47.616917   71330 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token x4jr9c.w9pmnlw4ccflqstb --discovery-token-ca-cert-hash sha256:8884e7cec26767ea186e311f265f5a190c626a6e55b00221424eafcad2c1cce3 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-891155-m02": (3.360498687s)
	I0811 23:25:47.616941   71330 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0811 23:25:47.840181   71330 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0811 23:25:47.840205   71330 start.go:303] JoinCluster complete in 3.780751258s
	I0811 23:25:47.840215   71330 cni.go:84] Creating CNI manager for ""
	I0811 23:25:47.840221   71330 cni.go:136] 2 nodes found, recommending kindnet
	I0811 23:25:47.840275   71330 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0811 23:25:47.844891   71330 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0811 23:25:47.844916   71330 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I0811 23:25:47.844924   71330 command_runner.go:130] > Device: 36h/54d	Inode: 1306623     Links: 1
	I0811 23:25:47.844932   71330 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0811 23:25:47.844939   71330 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I0811 23:25:47.844945   71330 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I0811 23:25:47.844951   71330 command_runner.go:130] > Change: 2023-08-11 23:01:54.534845020 +0000
	I0811 23:25:47.844957   71330 command_runner.go:130] >  Birth: 2023-08-11 23:01:54.490844603 +0000
	I0811 23:25:47.845386   71330 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.27.4/kubectl ...
	I0811 23:25:47.845401   71330 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0811 23:25:47.868097   71330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0811 23:25:48.165314   71330 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0811 23:25:48.170370   71330 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0811 23:25:48.173828   71330 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0811 23:25:48.192461   71330 command_runner.go:130] > daemonset.apps/kindnet configured
	I0811 23:25:48.198296   71330 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/17044-2333/kubeconfig
	I0811 23:25:48.198619   71330 kapi.go:59] client config for multinode-891155: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17044-2333/.minikube/profiles/multinode-891155/client.crt", KeyFile:"/home/jenkins/minikube-integration/17044-2333/.minikube/profiles/multinode-891155/client.key", CAFile:"/home/jenkins/minikube-integration/17044-2333/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16eb290), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0811 23:25:48.198944   71330 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0811 23:25:48.198964   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:48.198974   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:48.198981   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:48.201503   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:48.201531   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:48.201540   71330 round_trippers.go:580]     Audit-Id: cd0eb618-f081-4e38-bddd-ae989d977158
	I0811 23:25:48.201547   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:48.201554   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:48.201561   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:48.201568   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:48.201575   71330 round_trippers.go:580]     Content-Length: 291
	I0811 23:25:48.201586   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:48 GMT
	I0811 23:25:48.201614   71330 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"4304ada0-0fa2-48c9-be07-67a3612f0ddd","resourceVersion":"444","creationTimestamp":"2023-08-11T23:24:46Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0811 23:25:48.201705   71330 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-891155" context rescaled to 1 replicas
	I0811 23:25:48.201734   71330 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0811 23:25:48.205541   71330 out.go:177] * Verifying Kubernetes components...
	I0811 23:25:48.207235   71330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0811 23:25:48.221836   71330 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/17044-2333/kubeconfig
	I0811 23:25:48.222086   71330 kapi.go:59] client config for multinode-891155: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17044-2333/.minikube/profiles/multinode-891155/client.crt", KeyFile:"/home/jenkins/minikube-integration/17044-2333/.minikube/profiles/multinode-891155/client.key", CAFile:"/home/jenkins/minikube-integration/17044-2333/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16eb290), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0811 23:25:48.222354   71330 node_ready.go:35] waiting up to 6m0s for node "multinode-891155-m02" to be "Ready" ...
	I0811 23:25:48.222417   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155-m02
	I0811 23:25:48.222422   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:48.222430   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:48.222437   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:48.225161   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:48.225185   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:48.225193   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:48.225200   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:48.225207   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:48.225213   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:48 GMT
	I0811 23:25:48.225220   71330 round_trippers.go:580]     Audit-Id: b22adc80-7a17-4a1f-8325-4c4104c5f388
	I0811 23:25:48.225226   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:48.225342   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155-m02","uid":"a1b3af37-724f-4c22-b824-46637bec5913","resourceVersion":"481","creationTimestamp":"2023-08-11T23:25:47Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I0811 23:25:48.225728   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155-m02
	I0811 23:25:48.225743   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:48.225752   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:48.225759   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:48.228279   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:48.228298   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:48.228307   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:48.228313   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:48.228320   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:48.228331   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:48.228346   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:48 GMT
	I0811 23:25:48.228353   71330 round_trippers.go:580]     Audit-Id: 2f523c83-991b-46b3-8c2f-20d7e75558a7
	I0811 23:25:48.228487   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155-m02","uid":"a1b3af37-724f-4c22-b824-46637bec5913","resourceVersion":"481","creationTimestamp":"2023-08-11T23:25:47Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I0811 23:25:48.729555   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155-m02
	I0811 23:25:48.729577   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:48.729587   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:48.729595   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:48.732007   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:48.732026   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:48.732035   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:48.732042   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:48 GMT
	I0811 23:25:48.732048   71330 round_trippers.go:580]     Audit-Id: 3d1cf77e-32eb-4b29-aedd-4b7cb41a49ca
	I0811 23:25:48.732056   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:48.732065   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:48.732072   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:48.732687   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155-m02","uid":"a1b3af37-724f-4c22-b824-46637bec5913","resourceVersion":"481","creationTimestamp":"2023-08-11T23:25:47Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:47Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I0811 23:25:49.229231   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155-m02
	I0811 23:25:49.229253   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:49.229263   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:49.229270   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:49.231750   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:49.231788   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:49.231798   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:49.231807   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:49 GMT
	I0811 23:25:49.231813   71330 round_trippers.go:580]     Audit-Id: 161478c5-4623-4702-96ed-6639fab133ad
	I0811 23:25:49.231820   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:49.231831   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:49.231838   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:49.231966   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155-m02","uid":"a1b3af37-724f-4c22-b824-46637bec5913","resourceVersion":"485","creationTimestamp":"2023-08-11T23:25:47Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0811 23:25:49.729677   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155-m02
	I0811 23:25:49.729697   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:49.729707   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:49.729715   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:49.732270   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:49.732329   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:49.732350   71330 round_trippers.go:580]     Audit-Id: cd8d404c-105f-463b-9e9b-2cedb50dd518
	I0811 23:25:49.732371   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:49.732405   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:49.732429   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:49.732450   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:49.732463   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:49 GMT
	I0811 23:25:49.732592   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155-m02","uid":"a1b3af37-724f-4c22-b824-46637bec5913","resourceVersion":"485","creationTimestamp":"2023-08-11T23:25:47Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0811 23:25:50.229079   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155-m02
	I0811 23:25:50.229122   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:50.229132   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:50.229139   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:50.231950   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:50.231974   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:50.231982   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:50.231991   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:50.231998   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:50 GMT
	I0811 23:25:50.232005   71330 round_trippers.go:580]     Audit-Id: 28b4fd3b-b3bf-4b25-a77f-b87e75ce8ff0
	I0811 23:25:50.232012   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:50.232019   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:50.232119   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155-m02","uid":"a1b3af37-724f-4c22-b824-46637bec5913","resourceVersion":"485","creationTimestamp":"2023-08-11T23:25:47Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0811 23:25:50.232504   71330 node_ready.go:58] node "multinode-891155-m02" has status "Ready":"False"
	I0811 23:25:50.729257   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155-m02
	I0811 23:25:50.729281   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:50.729291   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:50.729299   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:50.731901   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:50.731926   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:50.731935   71330 round_trippers.go:580]     Audit-Id: 1b4b4e4f-8279-4307-a9e2-057e94e03b61
	I0811 23:25:50.731942   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:50.731949   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:50.731956   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:50.731962   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:50.731975   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:50 GMT
	I0811 23:25:50.732223   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155-m02","uid":"a1b3af37-724f-4c22-b824-46637bec5913","resourceVersion":"485","creationTimestamp":"2023-08-11T23:25:47Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0811 23:25:51.229795   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155-m02
	I0811 23:25:51.229817   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:51.229827   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:51.229834   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:51.232343   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:51.232364   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:51.232372   71330 round_trippers.go:580]     Audit-Id: a80d3b74-18bd-4383-be91-ba050ad9b225
	I0811 23:25:51.232380   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:51.232386   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:51.232393   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:51.232400   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:51.232407   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:51 GMT
	I0811 23:25:51.232540   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155-m02","uid":"a1b3af37-724f-4c22-b824-46637bec5913","resourceVersion":"485","creationTimestamp":"2023-08-11T23:25:47Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0811 23:25:51.729022   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155-m02
	I0811 23:25:51.729054   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:51.729076   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:51.729100   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:51.731661   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:51.731683   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:51.731692   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:51.731699   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:51 GMT
	I0811 23:25:51.731705   71330 round_trippers.go:580]     Audit-Id: 208bb0b1-f39c-424f-b6d1-ab77e2937c37
	I0811 23:25:51.731712   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:51.731720   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:51.731726   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:51.731841   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155-m02","uid":"a1b3af37-724f-4c22-b824-46637bec5913","resourceVersion":"502","creationTimestamp":"2023-08-11T23:25:47Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5378 chars]
	I0811 23:25:51.732196   71330 node_ready.go:49] node "multinode-891155-m02" has status "Ready":"True"
	I0811 23:25:51.732206   71330 node_ready.go:38] duration metric: took 3.509841958s waiting for node "multinode-891155-m02" to be "Ready" ...
	I0811 23:25:51.732214   71330 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0811 23:25:51.732272   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0811 23:25:51.732278   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:51.732286   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:51.732292   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:51.735872   71330 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0811 23:25:51.735897   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:51.735905   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:51 GMT
	I0811 23:25:51.735912   71330 round_trippers.go:580]     Audit-Id: 3925e55e-0905-4bb1-b73b-1e61e94eea39
	I0811 23:25:51.735920   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:51.735927   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:51.735934   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:51.735941   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:51.736659   71330 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"502"},"items":[{"metadata":{"name":"coredns-5d78c9869d-2zwtc","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"bdd66be6-b910-4f6f-8679-d0b0009e0cf4","resourceVersion":"440","creationTimestamp":"2023-08-11T23:24:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"d40c9552-811b-4579-860f-cb936e801f97","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d40c9552-811b-4579-860f-cb936e801f97\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68974 chars]
	I0811 23:25:51.739583   71330 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-2zwtc" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:51.739678   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-2zwtc
	I0811 23:25:51.739690   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:51.739699   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:51.739707   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:51.742284   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:51.742306   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:51.742315   71330 round_trippers.go:580]     Audit-Id: 164dcc46-5f3c-4f76-881c-ce529bd2e8f9
	I0811 23:25:51.742323   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:51.742330   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:51.742344   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:51.742357   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:51.742364   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:51 GMT
	I0811 23:25:51.742469   71330 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-2zwtc","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"bdd66be6-b910-4f6f-8679-d0b0009e0cf4","resourceVersion":"440","creationTimestamp":"2023-08-11T23:24:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"d40c9552-811b-4579-860f-cb936e801f97","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d40c9552-811b-4579-860f-cb936e801f97\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0811 23:25:51.742985   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:51.742997   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:51.743006   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:51.743015   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:51.745406   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:51.745471   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:51.745494   71330 round_trippers.go:580]     Audit-Id: 1d7f94e4-fd11-4cba-ac7b-86b38db0b00d
	I0811 23:25:51.745560   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:51.745574   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:51.745582   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:51.745588   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:51.745595   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:51 GMT
	I0811 23:25:51.745713   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"421","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0811 23:25:51.746095   71330 pod_ready.go:92] pod "coredns-5d78c9869d-2zwtc" in "kube-system" namespace has status "Ready":"True"
	I0811 23:25:51.746113   71330 pod_ready.go:81] duration metric: took 6.49428ms waiting for pod "coredns-5d78c9869d-2zwtc" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:51.746123   71330 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-891155" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:51.746181   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-891155
	I0811 23:25:51.746191   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:51.746199   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:51.746206   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:51.748555   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:51.748577   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:51.748586   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:51.748593   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:51.748599   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:51 GMT
	I0811 23:25:51.748606   71330 round_trippers.go:580]     Audit-Id: 0bc2ecf1-f7b6-464e-b977-98e2558b149f
	I0811 23:25:51.748616   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:51.748623   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:51.748725   71330 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-891155","namespace":"kube-system","uid":"3f2510f6-83c6-4da5-b61c-e95f02efe646","resourceVersion":"293","creationTimestamp":"2023-08-11T23:24:45Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"ac964265a61a9bdbd78ff9211e52f7d4","kubernetes.io/config.mirror":"ac964265a61a9bdbd78ff9211e52f7d4","kubernetes.io/config.seen":"2023-08-11T23:24:38.768275404Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0811 23:25:51.749192   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:51.749211   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:51.749220   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:51.749227   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:51.751491   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:51.751511   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:51.751520   71330 round_trippers.go:580]     Audit-Id: a8f4fac0-0a13-455c-9c17-ed4700deedb6
	I0811 23:25:51.751526   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:51.751533   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:51.751539   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:51.751546   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:51.751553   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:51 GMT
	I0811 23:25:51.751713   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"421","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0811 23:25:51.752113   71330 pod_ready.go:92] pod "etcd-multinode-891155" in "kube-system" namespace has status "Ready":"True"
	I0811 23:25:51.752129   71330 pod_ready.go:81] duration metric: took 5.999389ms waiting for pod "etcd-multinode-891155" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:51.752146   71330 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-891155" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:51.752202   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-891155
	I0811 23:25:51.752211   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:51.752219   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:51.752226   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:51.754540   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:51.754564   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:51.754573   71330 round_trippers.go:580]     Audit-Id: 245f100d-0591-4775-aae4-8dd2b02113a1
	I0811 23:25:51.754580   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:51.754588   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:51.754595   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:51.754608   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:51.754615   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:51 GMT
	I0811 23:25:51.754893   71330 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-891155","namespace":"kube-system","uid":"dfd78e52-0afb-4e5b-95e3-875b2bcee96a","resourceVersion":"291","creationTimestamp":"2023-08-11T23:24:46Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"e356ac5d6e97af27265e5b5cb0b92081","kubernetes.io/config.mirror":"e356ac5d6e97af27265e5b5cb0b92081","kubernetes.io/config.seen":"2023-08-11T23:24:46.098345297Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0811 23:25:51.755423   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:51.755432   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:51.755440   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:51.755450   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:51.757816   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:51.757838   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:51.757848   71330 round_trippers.go:580]     Audit-Id: 8c71b83a-cb28-43b0-9d35-6379b7358193
	I0811 23:25:51.757855   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:51.757863   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:51.757873   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:51.757882   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:51.757889   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:51 GMT
	I0811 23:25:51.758024   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"421","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0811 23:25:51.758416   71330 pod_ready.go:92] pod "kube-apiserver-multinode-891155" in "kube-system" namespace has status "Ready":"True"
	I0811 23:25:51.758433   71330 pod_ready.go:81] duration metric: took 6.27753ms waiting for pod "kube-apiserver-multinode-891155" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:51.758446   71330 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-891155" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:51.758505   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-891155
	I0811 23:25:51.758514   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:51.758522   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:51.758530   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:51.760925   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:51.760946   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:51.760955   71330 round_trippers.go:580]     Audit-Id: 64b3bf25-cde8-4757-b7bc-8a6900bcc2ff
	I0811 23:25:51.760962   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:51.760968   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:51.760975   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:51.760986   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:51.760993   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:51 GMT
	I0811 23:25:51.761209   71330 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-891155","namespace":"kube-system","uid":"c685b575-39b4-4046-bb4d-eae4f5a3ce41","resourceVersion":"295","creationTimestamp":"2023-08-11T23:24:46Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9dd1280a21dd2149b12943bab840e8ed","kubernetes.io/config.mirror":"9dd1280a21dd2149b12943bab840e8ed","kubernetes.io/config.seen":"2023-08-11T23:24:46.098346733Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0811 23:25:51.761715   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:51.761732   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:51.761740   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:51.761747   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:51.763921   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:51.763942   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:51.763950   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:51.763958   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:51.763965   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:51 GMT
	I0811 23:25:51.763974   71330 round_trippers.go:580]     Audit-Id: cc83e0ed-6d00-4eda-8efe-faf69ffff582
	I0811 23:25:51.763984   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:51.763991   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:51.764151   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"421","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0811 23:25:51.764527   71330 pod_ready.go:92] pod "kube-controller-manager-multinode-891155" in "kube-system" namespace has status "Ready":"True"
	I0811 23:25:51.764546   71330 pod_ready.go:81] duration metric: took 6.08803ms waiting for pod "kube-controller-manager-multinode-891155" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:51.764558   71330 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h2bt7" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:51.929920   71330 request.go:628] Waited for 165.278996ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h2bt7
	I0811 23:25:51.929973   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h2bt7
	I0811 23:25:51.929978   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:51.929987   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:51.930000   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:51.932670   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:51.932751   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:51.932767   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:51.932775   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:51.932782   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:51.932789   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:51.932796   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:51 GMT
	I0811 23:25:51.932806   71330 round_trippers.go:580]     Audit-Id: 3ce1d03c-58cc-4ee8-998e-5685371a3b48
	I0811 23:25:51.932917   71330 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-h2bt7","generateName":"kube-proxy-","namespace":"kube-system","uid":"0088ca20-d7c2-499c-8295-4cb3341df94e","resourceVersion":"406","creationTimestamp":"2023-08-11T23:25:00Z","labels":{"controller-revision-hash":"86cc8bcbf7","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8f2791eb-4070-4661-bec7-2fb7609006cb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8f2791eb-4070-4661-bec7-2fb7609006cb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0811 23:25:52.129745   71330 request.go:628] Waited for 196.331116ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:52.129797   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:52.129807   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:52.129818   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:52.129828   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:52.132436   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:52.132469   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:52.132477   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:52.132484   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:52 GMT
	I0811 23:25:52.132491   71330 round_trippers.go:580]     Audit-Id: 31c55463-94c3-4030-8100-87245dc4a031
	I0811 23:25:52.132497   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:52.132504   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:52.132515   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:52.132642   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"421","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0811 23:25:52.133046   71330 pod_ready.go:92] pod "kube-proxy-h2bt7" in "kube-system" namespace has status "Ready":"True"
	I0811 23:25:52.133063   71330 pod_ready.go:81] duration metric: took 368.496175ms waiting for pod "kube-proxy-h2bt7" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:52.133074   71330 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hgj85" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:52.329483   71330 request.go:628] Waited for 196.322846ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hgj85
	I0811 23:25:52.329599   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hgj85
	I0811 23:25:52.329612   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:52.329621   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:52.329629   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:52.332422   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:52.332484   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:52.332507   71330 round_trippers.go:580]     Audit-Id: cd58dc57-8296-461f-9c41-347dd1643515
	I0811 23:25:52.332526   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:52.332542   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:52.332557   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:52.332569   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:52.332588   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:52 GMT
	I0811 23:25:52.332699   71330 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hgj85","generateName":"kube-proxy-","namespace":"kube-system","uid":"7f47e394-591f-4b54-98e3-dcda7bcf8ed8","resourceVersion":"496","creationTimestamp":"2023-08-11T23:25:47Z","labels":{"controller-revision-hash":"86cc8bcbf7","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8f2791eb-4070-4661-bec7-2fb7609006cb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8f2791eb-4070-4661-bec7-2fb7609006cb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0811 23:25:52.529536   71330 request.go:628] Waited for 196.331223ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-891155-m02
	I0811 23:25:52.530068   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155-m02
	I0811 23:25:52.530084   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:52.530096   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:52.530107   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:52.532878   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:52.532967   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:52.532993   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:52.533002   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:52.533009   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:52 GMT
	I0811 23:25:52.533020   71330 round_trippers.go:580]     Audit-Id: 674846f3-b906-4f9b-8a36-24b011547d41
	I0811 23:25:52.533037   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:52.533049   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:52.533206   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155-m02","uid":"a1b3af37-724f-4c22-b824-46637bec5913","resourceVersion":"502","creationTimestamp":"2023-08-11T23:25:47Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:25:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5378 chars]
	I0811 23:25:52.533636   71330 pod_ready.go:92] pod "kube-proxy-hgj85" in "kube-system" namespace has status "Ready":"True"
	I0811 23:25:52.533655   71330 pod_ready.go:81] duration metric: took 400.555069ms waiting for pod "kube-proxy-hgj85" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:52.533670   71330 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-891155" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:52.730020   71330 request.go:628] Waited for 196.282213ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-891155
	I0811 23:25:52.730094   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-891155
	I0811 23:25:52.730140   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:52.730163   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:52.730172   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:52.732706   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:52.732738   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:52.732747   71330 round_trippers.go:580]     Audit-Id: 4c0d5f97-f4d9-4af0-b575-025f74e0aef0
	I0811 23:25:52.732754   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:52.732762   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:52.732768   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:52.732775   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:52.732817   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:52 GMT
	I0811 23:25:52.733158   71330 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-891155","namespace":"kube-system","uid":"4d802e08-54b8-4829-8dd3-a68522e6a129","resourceVersion":"322","creationTimestamp":"2023-08-11T23:24:46Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d9309c1ccc242467f2edd96945b86842","kubernetes.io/config.mirror":"d9309c1ccc242467f2edd96945b86842","kubernetes.io/config.seen":"2023-08-11T23:24:46.098349104Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-11T23:24:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0811 23:25:52.930005   71330 request.go:628] Waited for 196.346329ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:52.930079   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-891155
	I0811 23:25:52.930092   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:52.930103   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:52.930111   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:52.932618   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:52.932644   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:52.932653   71330 round_trippers.go:580]     Audit-Id: 09afaa77-014d-4b96-a146-6a7d3eefb2cb
	I0811 23:25:52.932661   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:52.932668   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:52.932678   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:52.932690   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:52.932699   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:52 GMT
	I0811 23:25:52.932797   71330 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"421","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-11T23:24:43Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0811 23:25:52.933209   71330 pod_ready.go:92] pod "kube-scheduler-multinode-891155" in "kube-system" namespace has status "Ready":"True"
	I0811 23:25:52.933226   71330 pod_ready.go:81] duration metric: took 399.545916ms waiting for pod "kube-scheduler-multinode-891155" in "kube-system" namespace to be "Ready" ...
	I0811 23:25:52.933238   71330 pod_ready.go:38] duration metric: took 1.201014574s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0811 23:25:52.933254   71330 system_svc.go:44] waiting for kubelet service to be running ....
	I0811 23:25:52.933311   71330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0811 23:25:52.946871   71330 system_svc.go:56] duration metric: took 13.607086ms WaitForService to wait for kubelet.
	I0811 23:25:52.946897   71330 kubeadm.go:581] duration metric: took 4.745136304s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0811 23:25:52.946918   71330 node_conditions.go:102] verifying NodePressure condition ...
	I0811 23:25:53.129157   71330 request.go:628] Waited for 182.171958ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0811 23:25:53.129207   71330 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0811 23:25:53.129219   71330 round_trippers.go:469] Request Headers:
	I0811 23:25:53.129228   71330 round_trippers.go:473]     Accept: application/json, */*
	I0811 23:25:53.129236   71330 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0811 23:25:53.131955   71330 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0811 23:25:53.132014   71330 round_trippers.go:577] Response Headers:
	I0811 23:25:53.132029   71330 round_trippers.go:580]     Audit-Id: 3ab37f8d-2f42-47d4-8220-fd8c9e3fd728
	I0811 23:25:53.132037   71330 round_trippers.go:580]     Cache-Control: no-cache, private
	I0811 23:25:53.132044   71330 round_trippers.go:580]     Content-Type: application/json
	I0811 23:25:53.132051   71330 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c5a17959-6b73-4eee-8f87-a61bba8a4fce
	I0811 23:25:53.132058   71330 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 265d4527-c5eb-4b30-b3a5-bb4b63ea2be3
	I0811 23:25:53.132067   71330 round_trippers.go:580]     Date: Fri, 11 Aug 2023 23:25:53 GMT
	I0811 23:25:53.132305   71330 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"503"},"items":[{"metadata":{"name":"multinode-891155","uid":"07bd670b-9cbc-40c3-8fff-cad950399d5b","resourceVersion":"421","creationTimestamp":"2023-08-11T23:24:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-891155","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0bff008270ec17d4e0c2c90a14e18ac31a0e01f5","minikube.k8s.io/name":"multinode-891155","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_11T23_24_47_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12452 chars]
	I0811 23:25:53.132936   71330 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0811 23:25:53.132958   71330 node_conditions.go:123] node cpu capacity is 2
	I0811 23:25:53.132969   71330 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0811 23:25:53.132976   71330 node_conditions.go:123] node cpu capacity is 2
	I0811 23:25:53.132980   71330 node_conditions.go:105] duration metric: took 186.058196ms to run NodePressure ...
	I0811 23:25:53.132994   71330 start.go:228] waiting for startup goroutines ...
	I0811 23:25:53.133018   71330 start.go:242] writing updated cluster config ...
	I0811 23:25:53.133361   71330 ssh_runner.go:195] Run: rm -f paused
	I0811 23:25:53.196343   71330 start.go:599] kubectl: 1.27.4, cluster: 1.27.4 (minor skew: 0)
	I0811 23:25:53.198472   71330 out.go:177] * Done! kubectl is now configured to use "multinode-891155" cluster and "default" namespace by default
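	
A note on the wait loop logged above: minikube polls each system-critical pod until its PodReady condition reports True, then verifies the kubelet unit with `sudo systemctl is-active --quiet service kubelet`. Below is a minimal client-go sketch of that per-pod readiness check; it is an illustration, not minikube's actual helper. The pod name comes from the log, and the kubeconfig path is an assumption:

	// podready_sketch.go — poll one pod until its Ready condition is True,
	// mirroring the pod_ready.go lines above (errors are simply retried).
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		// Assumes the default kubeconfig (~/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		for {
			pod, err := client.CoreV1().Pods("kube-system").Get(
				context.TODO(), "kube-scheduler-multinode-891155", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(400 * time.Millisecond) // comparable to the ~400ms waits in the log
		}
	}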
	
	* 
	* ==> CRI-O <==
	* Aug 11 23:25:31 multinode-891155 crio[899]: time="2023-08-11 23:25:31.780949011Z" level=info msg="Starting container: 1220e9fd46bcb83854289b6bccc35d8825e048c8670c532e525349cd813465a3" id=15e2f906-004e-4195-b423-ec429c164563 name=/runtime.v1.RuntimeService/StartContainer
	Aug 11 23:25:31 multinode-891155 crio[899]: time="2023-08-11 23:25:31.783544081Z" level=info msg="Created container 354688a6f093044fd5cae918c6664f43cf72239248a8b0eb0bdb0ea87cd2f5dc: kube-system/coredns-5d78c9869d-2zwtc/coredns" id=cbd1d535-043d-49cf-9e69-a89fff56266e name=/runtime.v1.RuntimeService/CreateContainer
	Aug 11 23:25:31 multinode-891155 crio[899]: time="2023-08-11 23:25:31.784003041Z" level=info msg="Starting container: 354688a6f093044fd5cae918c6664f43cf72239248a8b0eb0bdb0ea87cd2f5dc" id=eb865329-61b2-4ca8-93ef-b56d03051ab5 name=/runtime.v1.RuntimeService/StartContainer
	Aug 11 23:25:31 multinode-891155 crio[899]: time="2023-08-11 23:25:31.792293093Z" level=info msg="Started container" PID=1930 containerID=1220e9fd46bcb83854289b6bccc35d8825e048c8670c532e525349cd813465a3 description=kube-system/storage-provisioner/storage-provisioner id=15e2f906-004e-4195-b423-ec429c164563 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0df4ccfb369c76c5e9e2d77b977efd28fc4e04e5b0f1cea87a30e40245ac11af
	Aug 11 23:25:31 multinode-891155 crio[899]: time="2023-08-11 23:25:31.809525899Z" level=info msg="Started container" PID=1940 containerID=354688a6f093044fd5cae918c6664f43cf72239248a8b0eb0bdb0ea87cd2f5dc description=kube-system/coredns-5d78c9869d-2zwtc/coredns id=eb865329-61b2-4ca8-93ef-b56d03051ab5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7bcecae77374b347572958ba38859b99c0f64f6ce99be6eefd70ebeeac192cc5
	Aug 11 23:25:54 multinode-891155 crio[899]: time="2023-08-11 23:25:54.472186836Z" level=info msg="Running pod sandbox: default/busybox-67b7f59bb-qc8x6/POD" id=96c0bf1c-7e02-4a38-9dd6-60bcf521a9fb name=/runtime.v1.RuntimeService/RunPodSandbox
	Aug 11 23:25:54 multinode-891155 crio[899]: time="2023-08-11 23:25:54.472249983Z" level=warning msg="Allowed annotations are specified for workload []"
	Aug 11 23:25:54 multinode-891155 crio[899]: time="2023-08-11 23:25:54.490071023Z" level=info msg="Got pod network &{Name:busybox-67b7f59bb-qc8x6 Namespace:default ID:2d5dd75b01b5b61d220bd8d36315891a1e665fee816290ec9354b8c6aa248808 UID:91f27e75-cda6-4e53-b1e7-f75209983e0f NetNS:/var/run/netns/80a46a07-0f9b-4185-affa-9fa5082fa599 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 11 23:25:54 multinode-891155 crio[899]: time="2023-08-11 23:25:54.490116578Z" level=info msg="Adding pod default_busybox-67b7f59bb-qc8x6 to CNI network \"kindnet\" (type=ptp)"
	Aug 11 23:25:54 multinode-891155 crio[899]: time="2023-08-11 23:25:54.503442569Z" level=info msg="Got pod network &{Name:busybox-67b7f59bb-qc8x6 Namespace:default ID:2d5dd75b01b5b61d220bd8d36315891a1e665fee816290ec9354b8c6aa248808 UID:91f27e75-cda6-4e53-b1e7-f75209983e0f NetNS:/var/run/netns/80a46a07-0f9b-4185-affa-9fa5082fa599 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 11 23:25:54 multinode-891155 crio[899]: time="2023-08-11 23:25:54.503604704Z" level=info msg="Checking pod default_busybox-67b7f59bb-qc8x6 for CNI network kindnet (type=ptp)"
	Aug 11 23:25:54 multinode-891155 crio[899]: time="2023-08-11 23:25:54.510597427Z" level=info msg="Ran pod sandbox 2d5dd75b01b5b61d220bd8d36315891a1e665fee816290ec9354b8c6aa248808 with infra container: default/busybox-67b7f59bb-qc8x6/POD" id=96c0bf1c-7e02-4a38-9dd6-60bcf521a9fb name=/runtime.v1.RuntimeService/RunPodSandbox
	Aug 11 23:25:54 multinode-891155 crio[899]: time="2023-08-11 23:25:54.517581846Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=8eeef30f-4b73-4268-8075-627265464728 name=/runtime.v1.ImageService/ImageStatus
	Aug 11 23:25:54 multinode-891155 crio[899]: time="2023-08-11 23:25:54.517835872Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=8eeef30f-4b73-4268-8075-627265464728 name=/runtime.v1.ImageService/ImageStatus
	Aug 11 23:25:54 multinode-891155 crio[899]: time="2023-08-11 23:25:54.520675523Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=32fa5a2b-eb55-4131-9752-dc470005f867 name=/runtime.v1.ImageService/PullImage
	Aug 11 23:25:54 multinode-891155 crio[899]: time="2023-08-11 23:25:54.521961570Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Aug 11 23:25:55 multinode-891155 crio[899]: time="2023-08-11 23:25:55.272161961Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Aug 11 23:25:56 multinode-891155 crio[899]: time="2023-08-11 23:25:56.584531709Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3" id=32fa5a2b-eb55-4131-9752-dc470005f867 name=/runtime.v1.ImageService/PullImage
	Aug 11 23:25:56 multinode-891155 crio[899]: time="2023-08-11 23:25:56.586051928Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=38a8e938-4d03-41d8-a1d5-2715f50232b1 name=/runtime.v1.ImageService/ImageStatus
	Aug 11 23:25:56 multinode-891155 crio[899]: time="2023-08-11 23:25:56.586704799Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1496796,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=38a8e938-4d03-41d8-a1d5-2715f50232b1 name=/runtime.v1.ImageService/ImageStatus
	Aug 11 23:25:56 multinode-891155 crio[899]: time="2023-08-11 23:25:56.587641842Z" level=info msg="Creating container: default/busybox-67b7f59bb-qc8x6/busybox" id=6362afc4-3ba7-4b97-a038-9c95ebf9f22b name=/runtime.v1.RuntimeService/CreateContainer
	Aug 11 23:25:56 multinode-891155 crio[899]: time="2023-08-11 23:25:56.587763057Z" level=warning msg="Allowed annotations are specified for workload []"
	Aug 11 23:25:56 multinode-891155 crio[899]: time="2023-08-11 23:25:56.678242458Z" level=info msg="Created container 45c9e6e65c274d6bc75faba3a0d3581cf56c5c55a482109aff5a26ec96af273c: default/busybox-67b7f59bb-qc8x6/busybox" id=6362afc4-3ba7-4b97-a038-9c95ebf9f22b name=/runtime.v1.RuntimeService/CreateContainer
	Aug 11 23:25:56 multinode-891155 crio[899]: time="2023-08-11 23:25:56.680771466Z" level=info msg="Starting container: 45c9e6e65c274d6bc75faba3a0d3581cf56c5c55a482109aff5a26ec96af273c" id=c4e7a9aa-5573-4469-bbd6-341e9f70f0a3 name=/runtime.v1.RuntimeService/StartContainer
	Aug 11 23:25:56 multinode-891155 crio[899]: time="2023-08-11 23:25:56.690585100Z" level=info msg="Started container" PID=2076 containerID=45c9e6e65c274d6bc75faba3a0d3581cf56c5c55a482109aff5a26ec96af273c description=default/busybox-67b7f59bb-qc8x6/busybox id=c4e7a9aa-5573-4469-bbd6-341e9f70f0a3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2d5dd75b01b5b61d220bd8d36315891a1e665fee816290ec9354b8c6aa248808
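	
The CRI-O entries above trace one complete container lifecycle for busybox-67b7f59bb-qc8x6: RunPodSandbox, ImageStatus (image not found), PullImage, CreateContainer, StartContainer. The sketch below expresses that same call sequence against the CRI gRPC API (k8s.io/cri-api), with most request fields trimmed; it illustrates the protocol and is not CRI-O or kubelet source:

	package main
	
	import (
		"context"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func check(err error) {
		if err != nil {
			panic(err)
		}
	}
	
	func main() {
		// Socket path from the kubeadm.alpha.kubernetes.io/cri-socket annotation above.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		check(err)
		rt := runtimeapi.NewRuntimeServiceClient(conn)
		img := runtimeapi.NewImageServiceClient(conn)
		ctx := context.TODO()
	
		// 1. "Running pod sandbox" / "Ran pod sandbox": CRI-O also does the CNI setup here.
		sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
			Config: &runtimeapi.PodSandboxConfig{ /* pod metadata elided */ },
		})
		check(err)
	
		// 2. "Checking image status", then "Pulling image" only if it is absent.
		spec := &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28"}
		st, err := img.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{Image: spec})
		check(err)
		if st.GetImage() == nil {
			_, err = img.PullImage(ctx, &runtimeapi.PullImageRequest{Image: spec})
			check(err)
		}
	
		// 3. "Created container" / "Started container".
		c, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
			PodSandboxId: sb.PodSandboxId,
			Config:       &runtimeapi.ContainerConfig{Image: spec},
		})
		check(err)
		_, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: c.ContainerId})
		check(err)
	}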
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	45c9e6e65c274       gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3   5 seconds ago        Running             busybox                   0                   2d5dd75b01b5b       busybox-67b7f59bb-qc8x6
	354688a6f0930       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      30 seconds ago       Running             coredns                   0                   7bcecae77374b       coredns-5d78c9869d-2zwtc
	1220e9fd46bcb       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      30 seconds ago       Running             storage-provisioner       0                   0df4ccfb369c7       storage-provisioner
	0bb63ad89d500       532e5a30e948f1c084333316b13e68fbeff8df667f3830b082005127a6d86317                                      About a minute ago   Running             kube-proxy                0                   c813adbb2f62f       kube-proxy-h2bt7
	6a496e62c10da       b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79                                      About a minute ago   Running             kindnet-cni               0                   82a16588ac610       kindnet-jjmpp
	6309b2e0d3037       64aece92d6bde5b472d8185fcd2d5ab1add8814923a26561821f7cab5e819388                                      About a minute ago   Running             kube-apiserver            0                   4754a98464d3e       kube-apiserver-multinode-891155
	5374f9bcc0fda       6eb63895cb67fce76da3ed6eaaa865ff55e7c761c9e6a691a83855ff0987a085                                      About a minute ago   Running             kube-scheduler            0                   94320c4b9404a       kube-scheduler-multinode-891155
	9a758951bcd1f       24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737                                      About a minute ago   Running             etcd                      0                   8bfc8f06e9996       etcd-multinode-891155
	ff1df0f989a33       389f6f052cf83156f82a2bbbf6ea2c24292d246b58900d91f6a1707eacf510b2                                      About a minute ago   Running             kube-controller-manager   0                   d2e455e5352c7       kube-controller-manager-multinode-891155
	
	* 
	* ==> coredns [354688a6f093044fd5cae918c6664f43cf72239248a8b0eb0bdb0ea87cd2f5dc] <==
	* [INFO] 10.244.0.3:44057 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000100323s
	[INFO] 10.244.1.2:37682 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000165479s
	[INFO] 10.244.1.2:58860 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001047241s
	[INFO] 10.244.1.2:51763 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098485s
	[INFO] 10.244.1.2:37417 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000063696s
	[INFO] 10.244.1.2:36817 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001240527s
	[INFO] 10.244.1.2:43271 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000072795s
	[INFO] 10.244.1.2:56041 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000066321s
	[INFO] 10.244.1.2:36726 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071646s
	[INFO] 10.244.0.3:44820 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127949s
	[INFO] 10.244.0.3:44677 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105812s
	[INFO] 10.244.0.3:54265 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079532s
	[INFO] 10.244.0.3:48017 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000055228s
	[INFO] 10.244.1.2:40392 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145361s
	[INFO] 10.244.1.2:45386 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000078645s
	[INFO] 10.244.1.2:42021 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082263s
	[INFO] 10.244.1.2:44784 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00007899s
	[INFO] 10.244.0.3:51812 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136089s
	[INFO] 10.244.0.3:48070 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000092274s
	[INFO] 10.244.0.3:39168 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000127219s
	[INFO] 10.244.0.3:56428 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000094867s
	[INFO] 10.244.1.2:48875 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000183661s
	[INFO] 10.244.1.2:46365 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00009357s
	[INFO] 10.244.1.2:36128 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000083642s
	[INFO] 10.244.1.2:42534 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000071063s
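	
The query pattern above is the pod resolver's search-path expansion at work: with the cluster DNS options (search domains default.svc.cluster.local, svc.cluster.local, cluster.local and ndots:5), a lookup of kubernetes.default is also tried against each search domain, so the NXDOMAIN for kubernetes.default.default.svc.cluster.local alongside NOERROR for kubernetes.default.svc.cluster.local is expected behavior, not a failure. A small sketch of that candidate expansion (simplified; real resolvers honor more resolv.conf options and vary in ordering):

	package main
	
	import (
		"fmt"
		"strings"
	)
	
	// candidates lists the names a resolver tries for a relative query
	// when it has fewer dots than the ndots threshold.
	func candidates(name string, search []string, ndots int) []string {
		var out []string
		if strings.Count(name, ".") < ndots {
			for _, domain := range search {
				out = append(out, name+"."+domain)
			}
		}
		return append(out, name)
	}
	
	func main() {
		search := []string{"default.svc.cluster.local", "svc.cluster.local", "cluster.local"}
		for _, q := range candidates("kubernetes.default", search, 5) {
			fmt.Println(q) // the set of names queried in the coredns log above
		}
	}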
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-891155
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-891155
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0bff008270ec17d4e0c2c90a14e18ac31a0e01f5
	                    minikube.k8s.io/name=multinode-891155
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_11T23_24_47_0700
	                    minikube.k8s.io/version=v1.31.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 11 Aug 2023 23:24:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-891155
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 11 Aug 2023 23:25:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 11 Aug 2023 23:25:31 +0000   Fri, 11 Aug 2023 23:24:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 11 Aug 2023 23:25:31 +0000   Fri, 11 Aug 2023 23:24:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 11 Aug 2023 23:25:31 +0000   Fri, 11 Aug 2023 23:24:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 11 Aug 2023 23:25:31 +0000   Fri, 11 Aug 2023 23:25:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-891155
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022628Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022628Ki
	  pods:               110
	System Info:
	  Machine ID:                 5db7d89918e943dea94d2aa9e3016ad8
	  System UUID:                c53d9adf-9ad7-462d-9756-b58c3a0d3e6c
	  Boot ID:                    9640b2fc-8f02-48dc-9a98-7457f33cfb40
	  Kernel Version:             5.15.0-1040-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.27.4
	  Kube-Proxy Version:         v1.27.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-qc8x6                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 coredns-5d78c9869d-2zwtc                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     63s
	  kube-system                 etcd-multinode-891155                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         77s
	  kube-system                 kindnet-jjmpp                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      63s
	  kube-system                 kube-apiserver-multinode-891155             250m (12%)    0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-controller-manager-multinode-891155    200m (10%)    0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-proxy-h2bt7                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-scheduler-multinode-891155             100m (5%)     0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 61s                kube-proxy       
	  Normal  NodeHasSufficientMemory  84s (x8 over 84s)  kubelet          Node multinode-891155 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    84s (x8 over 84s)  kubelet          Node multinode-891155 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     84s (x8 over 84s)  kubelet          Node multinode-891155 status is now: NodeHasSufficientPID
	  Normal  Starting                 76s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  76s                kubelet          Node multinode-891155 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    76s                kubelet          Node multinode-891155 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     76s                kubelet          Node multinode-891155 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           64s                node-controller  Node multinode-891155 event: Registered Node multinode-891155 in Controller
	  Normal  NodeReady                31s                kubelet          Node multinode-891155 status is now: NodeReady
	
	
	Name:               multinode-891155-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-891155-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 11 Aug 2023 23:25:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-891155-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 11 Aug 2023 23:25:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 11 Aug 2023 23:25:51 +0000   Fri, 11 Aug 2023 23:25:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 11 Aug 2023 23:25:51 +0000   Fri, 11 Aug 2023 23:25:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 11 Aug 2023 23:25:51 +0000   Fri, 11 Aug 2023 23:25:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 11 Aug 2023 23:25:51 +0000   Fri, 11 Aug 2023 23:25:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-891155-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022628Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022628Ki
	  pods:               110
	System Info:
	  Machine ID:                 79e9afe3b99c40d08c00d868fa388ef2
	  System UUID:                20aa0cc0-7bf8-4c64-a0aa-9e0d14a16bc5
	  Boot ID:                    9640b2fc-8f02-48dc-9a98-7457f33cfb40
	  Kernel Version:             5.15.0-1040-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.27.4
	  Kube-Proxy Version:         v1.27.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-xv9cw    0 (0%)       0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kindnet-ddztz              100m (5%)    100m (5%)   50Mi (0%)        50Mi (0%)      15s
	  kube-system                 kube-proxy-hgj85           0 (0%)       0 (0%)      0 (0%)           0 (0%)         15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11s                kube-proxy       
	  Normal  NodeHasSufficientMemory  15s (x5 over 17s)  kubelet          Node multinode-891155-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15s (x5 over 17s)  kubelet          Node multinode-891155-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15s (x5 over 17s)  kubelet          Node multinode-891155-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14s                node-controller  Node multinode-891155-m02 event: Registered Node multinode-891155-m02 in Controller
	  Normal  NodeReady                11s                kubelet          Node multinode-891155-m02 status is now: NodeReady
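	
The "Allocated resources" percentages in the two node descriptions above follow directly from the listed per-pod requests and the node's allocatable resources (2 CPUs, 8022628Ki memory). A quick check of the control-plane node's numbers, using the integer truncation kubectl applies when printing:

	package main
	
	import "fmt"
	
	func main() {
		// CPU requests: coredns 100m + etcd 100m + kindnet 100m +
		// kube-apiserver 250m + kube-controller-manager 200m + kube-scheduler 100m.
		cpuRequestsMilli := 100 + 100 + 100 + 250 + 200 + 100
		cpuAllocatableMilli := 2 * 1000
		fmt.Printf("cpu: %dm (%d%%)\n",
			cpuRequestsMilli, cpuRequestsMilli*100/cpuAllocatableMilli) // cpu: 850m (42%)
	
		// Memory requests: coredns 70Mi + etcd 100Mi + kindnet 50Mi.
		memRequestsKi := (70 + 100 + 50) * 1024
		memAllocatableKi := 8022628
		fmt.Printf("memory: %dMi (%d%%)\n",
			memRequestsKi/1024, memRequestsKi*100/memAllocatableKi) // memory: 220Mi (2%)
	}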
	
	* 
	* ==> dmesg <==
	* [  +0.000754] FS-Cache: N-cookie c=0000000c [p=00000003 fl=2 nc=0 na=1]
	[  +0.000944] FS-Cache: N-cookie d=0000000087cf7eaf{9p.inode} n=00000000d7da585e
	[  +0.001054] FS-Cache: N-key=[8] '805b3b0000000000'
	[  +0.003010] FS-Cache: Duplicate cookie detected
	[  +0.000685] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000959] FS-Cache: O-cookie d=0000000087cf7eaf{9p.inode} n=0000000004a8382c
	[  +0.001063] FS-Cache: O-key=[8] '805b3b0000000000'
	[  +0.000759] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.000963] FS-Cache: N-cookie d=0000000087cf7eaf{9p.inode} n=0000000045141c8c
	[  +0.001051] FS-Cache: N-key=[8] '805b3b0000000000'
	[  +2.763262] FS-Cache: Duplicate cookie detected
	[  +0.000715] FS-Cache: O-cookie c=00000004 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000977] FS-Cache: O-cookie d=0000000087cf7eaf{9p.inode} n=000000003c07f4d4
	[  +0.001127] FS-Cache: O-key=[8] '7f5b3b0000000000'
	[  +0.000727] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.000956] FS-Cache: N-cookie d=0000000087cf7eaf{9p.inode} n=000000006a6921aa
	[  +0.001095] FS-Cache: N-key=[8] '7f5b3b0000000000'
	[  +0.384460] FS-Cache: Duplicate cookie detected
	[  +0.000735] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000980] FS-Cache: O-cookie d=0000000087cf7eaf{9p.inode} n=0000000084bb64d5
	[  +0.001049] FS-Cache: O-key=[8] '8a5b3b0000000000'
	[  +0.000719] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.000976] FS-Cache: N-cookie d=0000000087cf7eaf{9p.inode} n=00000000d7da585e
	[  +0.001049] FS-Cache: N-key=[8] '8a5b3b0000000000'
	[Aug11 23:13] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	* 
	* ==> etcd [9a758951bcd1f52962698d246c75200a78b78991d59a478f6ea1672667ca7db3] <==
	* {"level":"info","ts":"2023-08-11T23:24:39.653Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-08-11T23:24:39.653Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-08-11T23:24:39.666Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-08-11T23:24:39.667Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-08-11T23:24:39.667Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-08-11T23:24:39.667Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-08-11T23:24:39.667Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-08-11T23:24:39.731Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-08-11T23:24:39.731Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-08-11T23:24:39.731Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-08-11T23:24:39.731Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-08-11T23:24:39.731Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-08-11T23:24:39.731Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-08-11T23:24:39.731Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-08-11T23:24:39.737Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-11T23:24:39.738Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-08-11T23:24:39.738Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-11T23:24:39.737Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-891155 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-11T23:24:39.738Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-11T23:24:39.740Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-11T23:24:39.740Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-11T23:24:39.740Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-11T23:24:39.740Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-11T23:24:39.751Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-11T23:24:39.752Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  23:26:02 up  1:08,  0 users,  load average: 1.96, 1.91, 1.40
	Linux multinode-891155 5.15.0-1040-aws #45~20.04.1-Ubuntu SMP Tue Jul 11 19:11:12 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [6a496e62c10daedd2fd057b3f37add37c7a6a23cbba909b30ee5b12812744584] <==
	* I0811 23:25:00.635416       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0811 23:25:00.635696       1 main.go:107] hostIP = 192.168.58.2
	podIP = 192.168.58.2
	I0811 23:25:00.635872       1 main.go:116] setting mtu 1500 for CNI 
	I0811 23:25:00.635933       1 main.go:146] kindnetd IP family: "ipv4"
	I0811 23:25:00.635969       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0811 23:25:30.856768       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0811 23:25:30.870443       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0811 23:25:30.870586       1 main.go:227] handling current node
	I0811 23:25:40.886278       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0811 23:25:40.886390       1 main.go:227] handling current node
	I0811 23:25:50.899014       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0811 23:25:50.899042       1 main.go:227] handling current node
	I0811 23:25:50.899053       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0811 23:25:50.899058       1 main.go:250] Node multinode-891155-m02 has CIDR [10.244.1.0/24] 
	I0811 23:25:50.899191       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I0811 23:26:00.909633       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0811 23:26:00.909747       1 main.go:227] handling current node
	I0811 23:26:00.909784       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0811 23:26:00.909815       1 main.go:250] Node multinode-891155-m02 has CIDR [10.244.1.0/24] 
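	
The routes.go line above is kindnet wiring up cross-node pod networking: on multinode-891155 it installs a route sending the second node's pod CIDR (10.244.1.0/24) via that node's InternalIP (192.168.58.3); the struct printed in the log is a netlink route. A standalone sketch of the same route using the vishvananda/netlink package (an illustration under those assumptions, not kindnet's code; requires root):

	package main
	
	import (
		"net"
	
		"github.com/vishvananda/netlink"
	)
	
	func main() {
		_, dst, err := net.ParseCIDR("10.244.1.0/24") // multinode-891155-m02's PodCIDR
		if err != nil {
			panic(err)
		}
		// Equivalent to: ip route replace 10.244.1.0/24 via 192.168.58.3
		route := &netlink.Route{Dst: dst, Gw: net.ParseIP("192.168.58.3")}
		if err := netlink.RouteReplace(route); err != nil {
			panic(err)
		}
	}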
	
	* 
	* ==> kube-apiserver [6309b2e0d3037f364555db6eee02a9079d861250f3bbea918e7e587d47f22954] <==
	* I0811 23:24:43.252773       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0811 23:24:43.275178       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0811 23:24:43.275277       1 aggregator.go:152] initial CRD sync complete...
	I0811 23:24:43.275330       1 autoregister_controller.go:141] Starting autoregister controller
	I0811 23:24:43.275362       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0811 23:24:43.275392       1 cache.go:39] Caches are synced for autoregister controller
	I0811 23:24:43.457344       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0811 23:24:43.573608       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0811 23:24:43.942391       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0811 23:24:43.949254       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0811 23:24:43.949278       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0811 23:24:44.485046       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0811 23:24:44.529119       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0811 23:24:44.608573       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0811 23:24:44.614687       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0811 23:24:44.615915       1 controller.go:624] quota admission added evaluator for: endpoints
	I0811 23:24:44.620897       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0811 23:24:45.051765       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0811 23:24:46.028667       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0811 23:24:46.042604       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0811 23:24:46.056810       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0811 23:24:59.715648       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0811 23:24:59.900041       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	E0811 23:25:57.393849       1 watch.go:287] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoderWithAllocator{writer:responsewriter.outerWithCloseNotifyAndFlush{UserProvidedDecorator:(*metrics.ResponseWriterDelegator)(0x4009928840), InnerCloseNotifierFlusher:struct { httpsnoop.Unwrapper; http.ResponseWriter; http.Flusher; http.CloseNotifier; http.Pusher }{Unwrapper:(*httpsnoop.rw)(0x400a39e460), ResponseWriter:(*httpsnoop.rw)(0x400a39e460), Flusher:(*httpsnoop.rw)(0x400a39e460), CloseNotifier:(*httpsnoop.rw)(0x400a39e460), Pusher:(*httpsnoop.rw)(0x400a39e460)}}, encoder:(*versioning.codec)(0x400e620be0), memAllocator:(*runtime.Allocator)(0x400accceb8)})
	E0811 23:25:58.709344       1 upgradeaware.go:440] Error proxying data from backend to client: write tcp 192.168.58.2:8443->192.168.58.1:60292: write: broken pipe
	
	* 
	* ==> kube-controller-manager [ff1df0f989a331d72cd118117a5fe14dedc07a73f64d61a32a5fd565bb779d65] <==
	* I0811 23:24:58.856960       1 shared_informer.go:318] Caches are synced for resource quota
	I0811 23:24:58.864792       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0811 23:24:58.866914       1 shared_informer.go:318] Caches are synced for resource quota
	I0811 23:24:58.949463       1 shared_informer.go:318] Caches are synced for persistent volume
	I0811 23:24:59.346410       1 shared_informer.go:318] Caches are synced for garbage collector
	I0811 23:24:59.346535       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0811 23:24:59.371026       1 shared_informer.go:318] Caches are synced for garbage collector
	I0811 23:24:59.769523       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5d78c9869d to 2"
	I0811 23:25:00.125320       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-h2bt7"
	I0811 23:25:00.125959       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-jjmpp"
	I0811 23:25:00.126068       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5d78c9869d to 1 from 2"
	I0811 23:25:00.126132       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-2zwtc"
	I0811 23:25:00.219536       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-gz5jd"
	I0811 23:25:00.352304       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5d78c9869d-gz5jd"
	I0811 23:25:33.730662       1 node_lifecycle_controller.go:1046] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0811 23:25:47.405189       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-891155-m02\" does not exist"
	I0811 23:25:47.416905       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-891155-m02" podCIDRs=[10.244.1.0/24]
	I0811 23:25:47.427505       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-hgj85"
	I0811 23:25:47.430069       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-ddztz"
	I0811 23:25:48.732144       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-891155-m02"
	I0811 23:25:48.732294       1 event.go:307] "Event occurred" object="multinode-891155-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-891155-m02 event: Registered Node multinode-891155-m02 in Controller"
	W0811 23:25:51.399565       1 topologycache.go:232] Can't get CPU or zone information for multinode-891155-m02 node
	I0811 23:25:54.096140       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-67b7f59bb to 2"
	I0811 23:25:54.119237       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-xv9cw"
	I0811 23:25:54.142540       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-qc8x6"
	
	* 
	* ==> kube-proxy [0bb63ad89d5002dfc3e49d9a13b86b071c0f94df113ee639f7937e8bb511b93a] <==
	* I0811 23:25:00.726561       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I0811 23:25:00.726670       1 server_others.go:110] "Detected node IP" address="192.168.58.2"
	I0811 23:25:00.726689       1 server_others.go:554] "Using iptables proxy"
	I0811 23:25:00.755613       1 server_others.go:192] "Using iptables Proxier"
	I0811 23:25:00.755715       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0811 23:25:00.755804       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0811 23:25:00.755845       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0811 23:25:00.755939       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0811 23:25:00.756515       1 server.go:658] "Version info" version="v1.27.4"
	I0811 23:25:00.756766       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0811 23:25:00.757620       1 config.go:188] "Starting service config controller"
	I0811 23:25:00.757765       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0811 23:25:00.757999       1 config.go:97] "Starting endpoint slice config controller"
	I0811 23:25:00.758053       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0811 23:25:00.758671       1 config.go:315] "Starting node config controller"
	I0811 23:25:00.760425       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0811 23:25:00.858962       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0811 23:25:00.858965       1 shared_informer.go:318] Caches are synced for service config
	I0811 23:25:00.860527       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [5374f9bcc0fdab05ff9b07f80e732e0fc2d7481ca1d2970a93a23b80992a6ccf] <==
	* W0811 23:24:43.221003       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0811 23:24:43.221681       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0811 23:24:43.221040       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0811 23:24:43.221696       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0811 23:24:43.221144       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0811 23:24:43.221709       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0811 23:24:43.220906       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0811 23:24:43.221793       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0811 23:24:44.074189       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0811 23:24:44.074407       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0811 23:24:44.099400       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0811 23:24:44.099519       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0811 23:24:44.101049       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0811 23:24:44.101169       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0811 23:24:44.117118       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0811 23:24:44.117218       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0811 23:24:44.208989       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0811 23:24:44.209104       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0811 23:24:44.257063       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0811 23:24:44.257123       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0811 23:24:44.300953       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0811 23:24:44.301135       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0811 23:24:44.376889       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0811 23:24:44.376921       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0811 23:24:47.111292       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Aug 11 23:25:00 multinode-891155 kubelet[1387]: I0811 23:25:00.128650    1387 topology_manager.go:212] "Topology Admit Handler"
	Aug 11 23:25:00 multinode-891155 kubelet[1387]: I0811 23:25:00.216807    1387 topology_manager.go:212] "Topology Admit Handler"
	Aug 11 23:25:00 multinode-891155 kubelet[1387]: I0811 23:25:00.232343    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9cac9d75-0974-4abc-975e-b7a786d44c90-lib-modules\") pod \"kindnet-jjmpp\" (UID: \"9cac9d75-0974-4abc-975e-b7a786d44c90\") " pod="kube-system/kindnet-jjmpp"
	Aug 11 23:25:00 multinode-891155 kubelet[1387]: I0811 23:25:00.232413    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0088ca20-d7c2-499c-8295-4cb3341df94e-xtables-lock\") pod \"kube-proxy-h2bt7\" (UID: \"0088ca20-d7c2-499c-8295-4cb3341df94e\") " pod="kube-system/kube-proxy-h2bt7"
	Aug 11 23:25:00 multinode-891155 kubelet[1387]: I0811 23:25:00.232448    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0088ca20-d7c2-499c-8295-4cb3341df94e-kube-proxy\") pod \"kube-proxy-h2bt7\" (UID: \"0088ca20-d7c2-499c-8295-4cb3341df94e\") " pod="kube-system/kube-proxy-h2bt7"
	Aug 11 23:25:00 multinode-891155 kubelet[1387]: I0811 23:25:00.232477    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6l8p\" (UniqueName: \"kubernetes.io/projected/0088ca20-d7c2-499c-8295-4cb3341df94e-kube-api-access-d6l8p\") pod \"kube-proxy-h2bt7\" (UID: \"0088ca20-d7c2-499c-8295-4cb3341df94e\") " pod="kube-system/kube-proxy-h2bt7"
	Aug 11 23:25:00 multinode-891155 kubelet[1387]: I0811 23:25:00.232505    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0088ca20-d7c2-499c-8295-4cb3341df94e-lib-modules\") pod \"kube-proxy-h2bt7\" (UID: \"0088ca20-d7c2-499c-8295-4cb3341df94e\") " pod="kube-system/kube-proxy-h2bt7"
	Aug 11 23:25:00 multinode-891155 kubelet[1387]: I0811 23:25:00.232537    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9cac9d75-0974-4abc-975e-b7a786d44c90-cni-cfg\") pod \"kindnet-jjmpp\" (UID: \"9cac9d75-0974-4abc-975e-b7a786d44c90\") " pod="kube-system/kindnet-jjmpp"
	Aug 11 23:25:00 multinode-891155 kubelet[1387]: I0811 23:25:00.232574    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxltg\" (UniqueName: \"kubernetes.io/projected/9cac9d75-0974-4abc-975e-b7a786d44c90-kube-api-access-sxltg\") pod \"kindnet-jjmpp\" (UID: \"9cac9d75-0974-4abc-975e-b7a786d44c90\") " pod="kube-system/kindnet-jjmpp"
	Aug 11 23:25:00 multinode-891155 kubelet[1387]: I0811 23:25:00.232600    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9cac9d75-0974-4abc-975e-b7a786d44c90-xtables-lock\") pod \"kindnet-jjmpp\" (UID: \"9cac9d75-0974-4abc-975e-b7a786d44c90\") " pod="kube-system/kindnet-jjmpp"
	Aug 11 23:25:00 multinode-891155 kubelet[1387]: W0811 23:25:00.466021    1387 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/91ef6749902a9755bddb5f5abcfefe4686ab106ec29738faccd66ae8be66b0e1/crio-82a16588ac610bb59ab47cab8febc882d68ebd21cdd3ba841e731ebc65795bb0 WatchSource:0}: Error finding container 82a16588ac610bb59ab47cab8febc882d68ebd21cdd3ba841e731ebc65795bb0: Status 404 returned error can't find the container with id 82a16588ac610bb59ab47cab8febc882d68ebd21cdd3ba841e731ebc65795bb0
	Aug 11 23:25:01 multinode-891155 kubelet[1387]: I0811 23:25:01.304978    1387 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-jjmpp" podStartSLOduration=2.304933093 podCreationTimestamp="2023-08-11 23:24:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-11 23:25:01.30471956 +0000 UTC m=+15.306827627" watchObservedRunningTime="2023-08-11 23:25:01.304933093 +0000 UTC m=+15.307041160"
	Aug 11 23:25:01 multinode-891155 kubelet[1387]: I0811 23:25:01.305146    1387 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-h2bt7" podStartSLOduration=1.305125202 podCreationTimestamp="2023-08-11 23:25:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-11 23:25:01.285904916 +0000 UTC m=+15.288012967" watchObservedRunningTime="2023-08-11 23:25:01.305125202 +0000 UTC m=+15.307233286"
	Aug 11 23:25:31 multinode-891155 kubelet[1387]: I0811 23:25:31.307091    1387 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Aug 11 23:25:31 multinode-891155 kubelet[1387]: I0811 23:25:31.334828    1387 topology_manager.go:212] "Topology Admit Handler"
	Aug 11 23:25:31 multinode-891155 kubelet[1387]: I0811 23:25:31.340354    1387 topology_manager.go:212] "Topology Admit Handler"
	Aug 11 23:25:31 multinode-891155 kubelet[1387]: I0811 23:25:31.361636    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/65117b46-887a-42ef-8c9f-bb2f789898e5-tmp\") pod \"storage-provisioner\" (UID: \"65117b46-887a-42ef-8c9f-bb2f789898e5\") " pod="kube-system/storage-provisioner"
	Aug 11 23:25:31 multinode-891155 kubelet[1387]: I0811 23:25:31.361724    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bdd66be6-b910-4f6f-8679-d0b0009e0cf4-config-volume\") pod \"coredns-5d78c9869d-2zwtc\" (UID: \"bdd66be6-b910-4f6f-8679-d0b0009e0cf4\") " pod="kube-system/coredns-5d78c9869d-2zwtc"
	Aug 11 23:25:31 multinode-891155 kubelet[1387]: I0811 23:25:31.361753    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c272k\" (UniqueName: \"kubernetes.io/projected/bdd66be6-b910-4f6f-8679-d0b0009e0cf4-kube-api-access-c272k\") pod \"coredns-5d78c9869d-2zwtc\" (UID: \"bdd66be6-b910-4f6f-8679-d0b0009e0cf4\") " pod="kube-system/coredns-5d78c9869d-2zwtc"
	Aug 11 23:25:31 multinode-891155 kubelet[1387]: I0811 23:25:31.361791    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmq4d\" (UniqueName: \"kubernetes.io/projected/65117b46-887a-42ef-8c9f-bb2f789898e5-kube-api-access-xmq4d\") pod \"storage-provisioner\" (UID: \"65117b46-887a-42ef-8c9f-bb2f789898e5\") " pod="kube-system/storage-provisioner"
	Aug 11 23:25:32 multinode-891155 kubelet[1387]: I0811 23:25:32.356440    1387 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=32.35639662 podCreationTimestamp="2023-08-11 23:25:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-11 23:25:32.354635374 +0000 UTC m=+46.356743426" watchObservedRunningTime="2023-08-11 23:25:32.35639662 +0000 UTC m=+46.358504671"
	Aug 11 23:25:32 multinode-891155 kubelet[1387]: I0811 23:25:32.356512    1387 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-2zwtc" podStartSLOduration=33.356496338 podCreationTimestamp="2023-08-11 23:24:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-11 23:25:32.337130573 +0000 UTC m=+46.339238632" watchObservedRunningTime="2023-08-11 23:25:32.356496338 +0000 UTC m=+46.358604397"
	Aug 11 23:25:54 multinode-891155 kubelet[1387]: I0811 23:25:54.170285    1387 topology_manager.go:212] "Topology Admit Handler"
	Aug 11 23:25:54 multinode-891155 kubelet[1387]: I0811 23:25:54.203790    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86r78\" (UniqueName: \"kubernetes.io/projected/91f27e75-cda6-4e53-b1e7-f75209983e0f-kube-api-access-86r78\") pod \"busybox-67b7f59bb-qc8x6\" (UID: \"91f27e75-cda6-4e53-b1e7-f75209983e0f\") " pod="default/busybox-67b7f59bb-qc8x6"
	Aug 11 23:25:54 multinode-891155 kubelet[1387]: W0811 23:25:54.508268    1387 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/91ef6749902a9755bddb5f5abcfefe4686ab106ec29738faccd66ae8be66b0e1/crio-2d5dd75b01b5b61d220bd8d36315891a1e665fee816290ec9354b8c6aa248808 WatchSource:0}: Error finding container 2d5dd75b01b5b61d220bd8d36315891a1e665fee816290ec9354b8c6aa248808: Status 404 returned error can't find the container with id 2d5dd75b01b5b61d220bd8d36315891a1e665fee816290ec9354b8c6aa248808
	

                                                
                                                
-- /stdout --
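Note on the two manager.go:1159 warnings in the kubelet log above: they record a race between cadvisor's cgroup watcher and CRI-O. The inotify event for a freshly created container cgroup arrives before the runtime has registered the container, so the follow-up lookup returns 404 ("can't find the container"). The pod-startup SLO lines that follow each warning show the affected pods running moments later, so these warnings are noise rather than the cause of the PingHostFrom2Pods failure. A minimal Go sketch of the tolerant retry pattern, assuming a hypothetical lookup helper (not kubelet's or cadvisor's actual code):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// errNotFound stands in for the runtime's 404 ("can't find the container").
	var errNotFound = errors.New("container not found")

	// onCgroupCreated handles a watch event for a freshly created container
	// cgroup. CRI-O registers the container a moment after the cgroup
	// directory appears, so one short retry separates the benign creation
	// race seen in the log from a genuinely missing container.
	func onCgroupCreated(id string, lookup func(id string) error) error {
		if err := lookup(id); err == nil || !errors.Is(err, errNotFound) {
			return err // success, or a real (non-404) failure
		}
		time.Sleep(100 * time.Millisecond)
		return lookup(id)
	}

	func main() {
		// Simulate the race: the first lookup misses, the retry succeeds.
		calls := 0
		err := onCgroupCreated("82a16588ac61", func(string) error {
			calls++
			if calls == 1 {
				return errNotFound
			}
			return nil
		})
		fmt.Println("resolved after retry:", err == nil)
	}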
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p multinode-891155 -n multinode-891155
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-891155 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (4.61s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (69.56s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:132: (dbg) Run:  /tmp/minikube-v1.17.0.3544549905.exe start -p running-upgrade-341136 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0811 23:41:59.473976    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/functional-327081/client.crt: no such file or directory
version_upgrade_test.go:132: (dbg) Done: /tmp/minikube-v1.17.0.3544549905.exe start -p running-upgrade-341136 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m1.100123374s)
version_upgrade_test.go:142: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-341136 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:142: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p running-upgrade-341136 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (3.927077657s)

                                                
                                                
-- stdout --
	* [running-upgrade-341136] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17044
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17044-2333/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17044-2333/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.4
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-341136 in cluster running-upgrade-341136
	* Pulling base image ...
	* Updating the running docker "running-upgrade-341136" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0811 23:42:22.657568  131491 out.go:296] Setting OutFile to fd 1 ...
	I0811 23:42:22.657691  131491 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:42:22.657700  131491 out.go:309] Setting ErrFile to fd 2...
	I0811 23:42:22.657706  131491 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:42:22.657959  131491 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17044-2333/.minikube/bin
	I0811 23:42:22.658353  131491 out.go:303] Setting JSON to false
	I0811 23:42:22.659440  131491 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5091,"bootTime":1691792252,"procs":294,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0811 23:42:22.659544  131491 start.go:138] virtualization:  
	I0811 23:42:22.661999  131491 out.go:177] * [running-upgrade-341136] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	I0811 23:42:22.664122  131491 out.go:177]   - MINIKUBE_LOCATION=17044
	I0811 23:42:22.665705  131491 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0811 23:42:22.664214  131491 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17044-2333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I0811 23:42:22.664253  131491 notify.go:220] Checking for updates...
	I0811 23:42:22.669908  131491 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17044-2333/kubeconfig
	I0811 23:42:22.671704  131491 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17044-2333/.minikube
	I0811 23:42:22.673622  131491 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0811 23:42:22.675653  131491 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0811 23:42:22.678001  131491 config.go:182] Loaded profile config "running-upgrade-341136": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0811 23:42:22.680258  131491 out.go:177] * Kubernetes 1.27.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.4
	I0811 23:42:22.682245  131491 driver.go:373] Setting default libvirt URI to qemu:///system
	I0811 23:42:22.727387  131491 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0811 23:42:22.727487  131491 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0811 23:42:22.849520  131491 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17044-2333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4.checksum
	I0811 23:42:22.860106  131491 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:54 SystemTime:2023-08-11 23:42:22.850028916 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0811 23:42:22.860215  131491 docker.go:294] overlay module found
	I0811 23:42:22.863258  131491 out.go:177] * Using the docker driver based on existing profile
	I0811 23:42:22.865067  131491 start.go:298] selected driver: docker
	I0811 23:42:22.865121  131491 start.go:901] validating driver "docker" against &{Name:running-upgrade-341136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-341136 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.145 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0811 23:42:22.865233  131491 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0811 23:42:22.865892  131491 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0811 23:42:22.936140  131491 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:54 SystemTime:2023-08-11 23:42:22.92523204 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0811 23:42:22.936448  131491 cni.go:84] Creating CNI manager for ""
	I0811 23:42:22.936466  131491 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0811 23:42:22.936479  131491 start_flags.go:319] config:
	{Name:running-upgrade-341136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-341136 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.145 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0811 23:42:22.939402  131491 out.go:177] * Starting control plane node running-upgrade-341136 in cluster running-upgrade-341136
	I0811 23:42:22.941189  131491 cache.go:122] Beginning downloading kic base image for docker with crio
	I0811 23:42:22.942996  131491 out.go:177] * Pulling base image ...
	I0811 23:42:22.944592  131491 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I0811 23:42:22.944611  131491 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I0811 23:42:22.964225  131491 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I0811 23:42:22.964253  131491 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W0811 23:42:23.019436  131491 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I0811 23:42:23.019593  131491 profile.go:148] Saving config to /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/running-upgrade-341136/config.json ...
	I0811 23:42:23.019666  131491 cache.go:107] acquiring lock: {Name:mk7a5741f7b4e0b160bbe01f7fd094635998893d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 23:42:23.019758  131491 cache.go:115] /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0811 23:42:23.019768  131491 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 108.038µs
	I0811 23:42:23.019777  131491 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0811 23:42:23.019785  131491 cache.go:107] acquiring lock: {Name:mk94b66842cf8433baba636877bda45bc30090e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 23:42:23.019819  131491 cache.go:115] /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I0811 23:42:23.019824  131491 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 41.28µs
	I0811 23:42:23.019832  131491 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I0811 23:42:23.019839  131491 cache.go:107] acquiring lock: {Name:mk0c08839063be20f2e0c15aa4b7b2ce91a4b35c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 23:42:23.019849  131491 cache.go:195] Successfully downloaded all kic artifacts
	I0811 23:42:23.019865  131491 cache.go:115] /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I0811 23:42:23.019870  131491 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 32.525µs
	I0811 23:42:23.019877  131491 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I0811 23:42:23.019884  131491 cache.go:107] acquiring lock: {Name:mk7926f0ea7b917ece46b318e6b71004c7867d92 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 23:42:23.019905  131491 cache.go:107] acquiring lock: {Name:mk70992b6ec3b9f61b1ba2b19474481f643a50d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 23:42:23.019954  131491 cache.go:115] /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I0811 23:42:23.019972  131491 cache.go:115] /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I0811 23:42:23.019970  131491 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 64.444µs
	I0811 23:42:23.019981  131491 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I0811 23:42:23.019980  131491 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 97.798µs
	I0811 23:42:23.019988  131491 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I0811 23:42:23.019989  131491 cache.go:107] acquiring lock: {Name:mkaea833af7e0d47e53f03de55b4fcd709cc7efc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 23:42:23.019996  131491 cache.go:107] acquiring lock: {Name:mk6e8ba24db6814fb692d5c252326ec03b67fcc8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 23:42:23.020027  131491 cache.go:115] /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I0811 23:42:23.020032  131491 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 44.275µs
	I0811 23:42:23.020038  131491 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I0811 23:42:23.020049  131491 cache.go:115] /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I0811 23:42:23.020048  131491 cache.go:107] acquiring lock: {Name:mk73f2a48d6979c6af785fa1481e1aebfce32d75 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 23:42:23.019885  131491 start.go:365] acquiring machines lock for running-upgrade-341136: {Name:mke8dd9e324822741f7005cfc52a576e853d534e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 23:42:23.020054  131491 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 59.652µs
	I0811 23:42:23.020078  131491 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I0811 23:42:23.020098  131491 cache.go:115] /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I0811 23:42:23.020108  131491 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 60.718µs
	I0811 23:42:23.020112  131491 start.go:369] acquired machines lock for "running-upgrade-341136" in 47.804µs
	I0811 23:42:23.020114  131491 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I0811 23:42:23.020120  131491 cache.go:87] Successfully saved all images to host disk.
	I0811 23:42:23.020147  131491 start.go:96] Skipping create...Using existing machine configuration
	I0811 23:42:23.020160  131491 fix.go:54] fixHost starting: 
	I0811 23:42:23.020441  131491 cli_runner.go:164] Run: docker container inspect running-upgrade-341136 --format={{.State.Status}}
	I0811 23:42:23.038735  131491 fix.go:102] recreateIfNeeded on running-upgrade-341136: state=Running err=<nil>
	W0811 23:42:23.038762  131491 fix.go:128] unexpected machine state, will restart: <nil>
	I0811 23:42:23.040664  131491 out.go:177] * Updating the running docker "running-upgrade-341136" container ...
	I0811 23:42:23.042472  131491 machine.go:88] provisioning docker machine ...
	I0811 23:42:23.042498  131491 ubuntu.go:169] provisioning hostname "running-upgrade-341136"
	I0811 23:42:23.042582  131491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-341136
	I0811 23:42:23.060952  131491 main.go:141] libmachine: Using SSH client type: native
	I0811 23:42:23.061477  131491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f940] 0x3a22d0 <nil>  [] 0s} 127.0.0.1 32958 <nil> <nil>}
	I0811 23:42:23.061496  131491 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-341136 && echo "running-upgrade-341136" | sudo tee /etc/hostname
	I0811 23:42:23.216964  131491 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-341136
	
	I0811 23:42:23.217055  131491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-341136
	I0811 23:42:23.242967  131491 main.go:141] libmachine: Using SSH client type: native
	I0811 23:42:23.243516  131491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f940] 0x3a22d0 <nil>  [] 0s} 127.0.0.1 32958 <nil> <nil>}
	I0811 23:42:23.243543  131491 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-341136' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-341136/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-341136' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0811 23:42:23.393207  131491 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0811 23:42:23.393232  131491 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17044-2333/.minikube CaCertPath:/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17044-2333/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17044-2333/.minikube}
	I0811 23:42:23.393268  131491 ubuntu.go:177] setting up certificates
	I0811 23:42:23.393280  131491 provision.go:83] configureAuth start
	I0811 23:42:23.393352  131491 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-341136
	I0811 23:42:23.412463  131491 provision.go:138] copyHostCerts
	I0811 23:42:23.412536  131491 exec_runner.go:144] found /home/jenkins/minikube-integration/17044-2333/.minikube/ca.pem, removing ...
	I0811 23:42:23.412548  131491 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17044-2333/.minikube/ca.pem
	I0811 23:42:23.412628  131491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17044-2333/.minikube/ca.pem (1082 bytes)
	I0811 23:42:23.412778  131491 exec_runner.go:144] found /home/jenkins/minikube-integration/17044-2333/.minikube/cert.pem, removing ...
	I0811 23:42:23.412788  131491 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17044-2333/.minikube/cert.pem
	I0811 23:42:23.412817  131491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17044-2333/.minikube/cert.pem (1123 bytes)
	I0811 23:42:23.412885  131491 exec_runner.go:144] found /home/jenkins/minikube-integration/17044-2333/.minikube/key.pem, removing ...
	I0811 23:42:23.412894  131491 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17044-2333/.minikube/key.pem
	I0811 23:42:23.412919  131491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17044-2333/.minikube/key.pem (1675 bytes)
	I0811 23:42:23.412975  131491 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17044-2333/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-341136 san=[192.168.70.145 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-341136]
	I0811 23:42:24.186542  131491 provision.go:172] copyRemoteCerts
	I0811 23:42:24.186624  131491 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0811 23:42:24.186671  131491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-341136
	I0811 23:42:24.206201  131491 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/running-upgrade-341136/id_rsa Username:docker}
	I0811 23:42:24.311106  131491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0811 23:42:24.381750  131491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0811 23:42:24.429478  131491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0811 23:42:24.463179  131491 provision.go:86] duration metric: configureAuth took 1.069882475s
	I0811 23:42:24.463268  131491 ubuntu.go:193] setting minikube options for container-runtime
	I0811 23:42:24.463508  131491 config.go:182] Loaded profile config "running-upgrade-341136": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0811 23:42:24.463661  131491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-341136
	I0811 23:42:24.489094  131491 main.go:141] libmachine: Using SSH client type: native
	I0811 23:42:24.489545  131491 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f940] 0x3a22d0 <nil>  [] 0s} 127.0.0.1 32958 <nil> <nil>}
	I0811 23:42:24.489569  131491 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0811 23:42:25.060299  131491 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0811 23:42:25.060319  131491 machine.go:91] provisioned docker machine in 2.017830809s
	I0811 23:42:25.060330  131491 start.go:300] post-start starting for "running-upgrade-341136" (driver="docker")
	I0811 23:42:25.060340  131491 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0811 23:42:25.060411  131491 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0811 23:42:25.060455  131491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-341136
	I0811 23:42:25.082124  131491 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/running-upgrade-341136/id_rsa Username:docker}
	I0811 23:42:25.187937  131491 ssh_runner.go:195] Run: cat /etc/os-release
	I0811 23:42:25.192083  131491 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0811 23:42:25.192110  131491 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0811 23:42:25.192122  131491 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0811 23:42:25.192129  131491 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0811 23:42:25.192138  131491 filesync.go:126] Scanning /home/jenkins/minikube-integration/17044-2333/.minikube/addons for local assets ...
	I0811 23:42:25.192195  131491 filesync.go:126] Scanning /home/jenkins/minikube-integration/17044-2333/.minikube/files for local assets ...
	I0811 23:42:25.192317  131491 filesync.go:149] local asset: /home/jenkins/minikube-integration/17044-2333/.minikube/files/etc/ssl/certs/76342.pem -> 76342.pem in /etc/ssl/certs
	I0811 23:42:25.192419  131491 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0811 23:42:25.202339  131491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/files/etc/ssl/certs/76342.pem --> /etc/ssl/certs/76342.pem (1708 bytes)
	I0811 23:42:25.235878  131491 start.go:303] post-start completed in 175.531623ms
	I0811 23:42:25.235973  131491 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0811 23:42:25.236012  131491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-341136
	I0811 23:42:25.255204  131491 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/running-upgrade-341136/id_rsa Username:docker}
	I0811 23:42:25.354022  131491 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0811 23:42:25.361428  131491 fix.go:56] fixHost completed within 2.341229648s
	I0811 23:42:25.361469  131491 start.go:83] releasing machines lock for "running-upgrade-341136", held for 2.341347416s
	I0811 23:42:25.361539  131491 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-341136
	I0811 23:42:25.381148  131491 ssh_runner.go:195] Run: cat /version.json
	I0811 23:42:25.381203  131491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-341136
	I0811 23:42:25.381288  131491 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0811 23:42:25.381482  131491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-341136
	I0811 23:42:25.426138  131491 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/running-upgrade-341136/id_rsa Username:docker}
	I0811 23:42:25.434097  131491 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/running-upgrade-341136/id_rsa Username:docker}
	W0811 23:42:25.659615  131491 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0811 23:42:25.659779  131491 ssh_runner.go:195] Run: systemctl --version
	I0811 23:42:25.666195  131491 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0811 23:42:25.806948  131491 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0811 23:42:25.819319  131491 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0811 23:42:25.844879  131491 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0811 23:42:25.844986  131491 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0811 23:42:25.882971  131491 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0811 23:42:25.882994  131491 start.go:466] detecting cgroup driver to use...
	I0811 23:42:25.883056  131491 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0811 23:42:25.883146  131491 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0811 23:42:25.913788  131491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0811 23:42:25.927657  131491 docker.go:196] disabling cri-docker service (if available) ...
	I0811 23:42:25.927755  131491 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0811 23:42:25.941052  131491 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0811 23:42:25.960194  131491 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0811 23:42:25.974052  131491 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0811 23:42:25.974143  131491 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0811 23:42:26.128920  131491 docker.go:212] disabling docker service ...
	I0811 23:42:26.128983  131491 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0811 23:42:26.143002  131491 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0811 23:42:26.161942  131491 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0811 23:42:26.324794  131491 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0811 23:42:26.464373  131491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0811 23:42:26.478225  131491 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0811 23:42:26.498534  131491 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0811 23:42:26.498623  131491 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0811 23:42:26.517165  131491 out.go:177] 
	W0811 23:42:26.518815  131491 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0811 23:42:26.518841  131491 out.go:239] * 
	* 
	W0811 23:42:26.519837  131491 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0811 23:42:26.521606  131491 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:144: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p running-upgrade-341136 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
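The exit status 90 is fully explained by the stderr above: the new binary rewrites pause_image by running sed against /etc/crio/crio.conf.d/02-crio.conf, but the kicbase v0.0.17 image that minikube v1.17.0 created this container from ships only the legacy monolithic /etc/crio/crio.conf, so the sed exits 2 and start aborts with RUNTIME_ENABLE. A minimal Go sketch of a layout-tolerant rewrite, assuming a hypothetical runSSH helper (illustrative only, not minikube's actual code path):

	package main

	import "fmt"

	// updatePauseImage rewrites pause_image in the first CRI-O config that
	// exists on the node. kicbase v0.0.17 (built for minikube v1.17.0)
	// predates the /etc/crio/crio.conf.d drop-in directory, which is why the
	// unguarded sed in the log above exited with status 2.
	func updatePauseImage(runSSH func(cmd string) error) error {
		candidates := []string{
			"/etc/crio/crio.conf.d/02-crio.conf", // modern kicbase layout
			"/etc/crio/crio.conf",                // legacy layout in kicbase v0.0.17
		}
		for _, path := range candidates {
			// `test -f` guards the sed so a missing file falls through to the
			// next candidate instead of aborting the whole start.
			cmd := fmt.Sprintf(`sudo test -f %[1]s && sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' %[1]s`, path)
			if err := runSSH(cmd); err == nil {
				return nil
			}
		}
		return fmt.Errorf("no CRI-O config found to update pause_image")
	}

	func main() {
		// Dry run: print the first candidate's guarded command instead of executing it.
		_ = updatePauseImage(func(cmd string) error {
			fmt.Println(cmd)
			return nil
		})
	}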
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-08-11 23:42:26.551834015 +0000 UTC m=+2505.464194555
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-341136
helpers_test.go:235: (dbg) docker inspect running-upgrade-341136:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "87bcf727a0ca73e34e683121c888b1d0326aaff2a74299d82bd1d9d9a6faf172",
	        "Created": "2023-08-11T23:41:35.182275658Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 127703,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-08-11T23:41:35.604381525Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/87bcf727a0ca73e34e683121c888b1d0326aaff2a74299d82bd1d9d9a6faf172/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/87bcf727a0ca73e34e683121c888b1d0326aaff2a74299d82bd1d9d9a6faf172/hostname",
	        "HostsPath": "/var/lib/docker/containers/87bcf727a0ca73e34e683121c888b1d0326aaff2a74299d82bd1d9d9a6faf172/hosts",
	        "LogPath": "/var/lib/docker/containers/87bcf727a0ca73e34e683121c888b1d0326aaff2a74299d82bd1d9d9a6faf172/87bcf727a0ca73e34e683121c888b1d0326aaff2a74299d82bd1d9d9a6faf172-json.log",
	        "Name": "/running-upgrade-341136",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-341136:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "running-upgrade-341136",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d7e0e511ba99702b3ba5f23a069dee3db01471f12fd237137f8fadba2b4f71ab-init/diff:/var/lib/docker/overlay2/f6b5f4a1984dd6b2e736b812227c36ae23973d0cbcb0f607a1f2a316015bc04e/diff:/var/lib/docker/overlay2/1c67440e0c27d857fbacbe8726e57cc4e185293ef97089d76c8914039cf308bb/diff:/var/lib/docker/overlay2/c2d2cbd188b682d0deb79609216d18f4870be3d3643e2e751ad25ae9fc5deae8/diff:/var/lib/docker/overlay2/0b116b585e7a1233d5dfb4117c4b64b81a39ab76810fadcaafa07507c4e88499/diff:/var/lib/docker/overlay2/5ab54810269fb8bf7d760767f56d80d88723c5e3e381a622bf77a6a3a7785240/diff:/var/lib/docker/overlay2/e089e01a71c21c17fbe3466b9c68478c59eb61585f8c3d9d0ff699f185c50494/diff:/var/lib/docker/overlay2/faa3c301825cc6d96bc681af60d302954c04994a0b56f7aa4d9e2bb69e49f596/diff:/var/lib/docker/overlay2/70abf9f9a37c3eb9cdb0acfea07d784d97e60111b9e59ab6765907762bd50fcd/diff:/var/lib/docker/overlay2/95864a2dc9c10c1228b5bbb5a8e75b84c3dffb7af92315d99f24d8bd98ab847a/diff:/var/lib/docker/overlay2/e02f38a761c051fa89fc7265c959f809cf433c946d8c1b587a1fe7119388ef51/diff:/var/lib/docker/overlay2/06345b92b20bb93d3a7a00ecce2c8b0b1d9c6f3e2afffa166da3b696a65a711d/diff:/var/lib/docker/overlay2/b759d7f369ba0931d6d30936ccfd7ff1d3e383e4953bbec902bd42d08d91fd8a/diff:/var/lib/docker/overlay2/861c2e06c3a6d666274a4c5638640f4ed6d943f4d6f57e6bef55fdc97cc106d7/diff:/var/lib/docker/overlay2/56832fed6164a3f29a824ebab0142ada37f2b9ff643c468fda96e57ba2fb0cf3/diff:/var/lib/docker/overlay2/e6f4cf3f44c5355f75911350d0e8f09385edc277a82b260f47bc5aa1420b69c0/diff:/var/lib/docker/overlay2/9d0e371c22717ff879901c792272b11efbfc9e2b9e84b610b427d3b991a64afc/diff:/var/lib/docker/overlay2/2885f069a5b7856a77dd1cd13b97a5bf60e76e35219e99fac20f9ca17ee78fec/diff:/var/lib/docker/overlay2/24fda60df17c39086911731474343f5a14b7e5641a7863f23f0e54dcd9248dd1/diff:/var/lib/docker/overlay2/1214b1478e3270fa38cd4c65fb992ee4d4d22518d5ec2958efc5063ecd9cb6a0/diff:/var/lib/docker/overlay2/c0b48049c816ae324d24d6e50344b7dc395059195b6c038f87af49295c958bf1/diff:/var/lib/docker/overlay2/274944d6e6d135c9dd829e7949ac5f215b5862e47b3aa456fe5205a36ea590d6/diff:/var/lib/docker/overlay2/29b325aaeda230f2cf356035def829ecf66880ce40d3a5ab865397827716084f/diff:/var/lib/docker/overlay2/2204153196cf450ffef2b5ffe9dadbc15098d689a3ee8c4768c896fea1524a30/diff:/var/lib/docker/overlay2/0af3a7bc58af17666663c9f6269a9fd2e2ef57f483c8f3c322fd58d60a0c2acc/diff:/var/lib/docker/overlay2/50ee0aa52c1101f265d058ac7c6cb9d76b7406bece3ff786bbeb61376389044a/diff:/var/lib/docker/overlay2/03c6931c4166eee33f25c3564a0fac4cbf9c80deac372a6f15eb0d67937bb81a/diff:/var/lib/docker/overlay2/3e0cd1e4bff2213391a0619326df80464f712f64583e66e8dc626b8f30b7979d/diff:/var/lib/docker/overlay2/6526cb5b368ea4fd77ed9d3c4532f472aa977598f63bcc4563dab85be67b6807/diff:/var/lib/docker/overlay2/072d4667f6e487bd0fc874b21277688f7a5c2fb35a987bf12e2dcca17776da05/diff:/var/lib/docker/overlay2/b0a58a45636499880957f3177eb1905822852ccd2c8c89244392ee0655a8fc88/diff:/var/lib/docker/overlay2/44dd6bc233c32eded2fdc517e836adccf206ccdd49bae5502f14549f06b6d473/diff:/var/lib/docker/overlay2/cee2a718ef5182ebff21069cc10c6a748588ec5dff3ce443a048a0fc55e556df/diff:/var/lib/docker/overlay2/f5a09841789d7fdafa1e1837c654dbc6ed940869d0d03d38a78d906b52e19e8e/diff:/var/lib/docker/overlay2/dfe4abb91d08d9b6e02ec3d418a1262e966d84b16c01b7f45c3f531333f58096/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d7e0e511ba99702b3ba5f23a069dee3db01471f12fd237137f8fadba2b4f71ab/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d7e0e511ba99702b3ba5f23a069dee3db01471f12fd237137f8fadba2b4f71ab/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d7e0e511ba99702b3ba5f23a069dee3db01471f12fd237137f8fadba2b4f71ab/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-341136",
	                "Source": "/var/lib/docker/volumes/running-upgrade-341136/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-341136",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-341136",
	                "name.minikube.sigs.k8s.io": "running-upgrade-341136",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "508a9235f52390c351effadfbf5d4408f8dbbde75a905a614491a33398b27c5d",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32958"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32957"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32956"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32955"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/508a9235f523",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "running-upgrade-341136": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.70.145"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "87bcf727a0ca",
	                        "running-upgrade-341136"
	                    ],
	                    "NetworkID": "179fd4f474465da6bff48dc85285707aa49ef6eab7760b390433d661301126fe",
	                    "EndpointID": "4d0466a74ad9556559a2e0a2da40b073a4aba05faf0e5edbdc4e77f153449e6e",
	                    "Gateway": "192.168.70.1",
	                    "IPAddress": "192.168.70.145",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:46:91",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
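For reference, the host-side port mappings captured in the inspect output above (container ports 22, 2376, 5000 and 8443, each published on 127.0.0.1) can be read back without dumping the whole JSON. A minimal sketch using docker's Go-template support, with the container name taken from the output above:

	# print the host port bound to the API server port 8443 (32955 per the inspect output)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' running-upgrade-341136
	# or let docker resolve the mapping directly
	docker port running-upgrade-341136 8443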
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-341136 -n running-upgrade-341136
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-341136 -n running-upgrade-341136: exit status 4 (495.781655ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0811 23:42:26.999319  132077 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-341136" does not appear in /home/jenkins/minikube-integration/17044-2333/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-341136" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-341136" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-341136
E0811 23:42:27.849983    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-341136: (2.955140429s)
--- FAIL: TestRunningBinaryUpgrade (69.56s)
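The status probe above exits 4 because the host container is Running but the profile's endpoint was never written to /home/jenkins/minikube-integration/17044-2333/kubeconfig, so extracting the IP fails. The warning in the output names the remedy; a minimal sketch, assuming the profile had not yet been deleted:

	# regenerate the kubeconfig entry for the profile, as the warning suggests
	out/minikube-linux-arm64 update-context -p running-upgrade-341136
	# verify kubectl now points at the refreshed context
	kubectl config current-context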

                                                
                                    
TestMissingContainerUpgrade (185.53s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:321: (dbg) Run:  /tmp/minikube-v1.17.0.1176753116.exe start -p missing-upgrade-550468 --memory=2200 --driver=docker  --container-runtime=crio
E0811 23:36:59.474567    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/functional-327081/client.crt: no such file or directory
version_upgrade_test.go:321: (dbg) Done: /tmp/minikube-v1.17.0.1176753116.exe start -p missing-upgrade-550468 --memory=2200 --driver=docker  --container-runtime=crio: (2m15.304550112s)
version_upgrade_test.go:330: (dbg) Run:  docker stop missing-upgrade-550468
version_upgrade_test.go:330: (dbg) Done: docker stop missing-upgrade-550468: (2.029230147s)
version_upgrade_test.go:335: (dbg) Run:  docker rm missing-upgrade-550468
version_upgrade_test.go:341: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-550468 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:341: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p missing-upgrade-550468 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (44.36625361s)

                                                
                                                
-- stdout --
	* [missing-upgrade-550468] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17044
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17044-2333/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17044-2333/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.4
	* Using the docker driver based on existing profile
	* Starting control plane node missing-upgrade-550468 in cluster missing-upgrade-550468
	* Pulling base image ...
	* docker "missing-upgrade-550468" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0811 23:39:06.839323  118130 out.go:296] Setting OutFile to fd 1 ...
	I0811 23:39:06.839543  118130 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:39:06.839569  118130 out.go:309] Setting ErrFile to fd 2...
	I0811 23:39:06.839587  118130 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:39:06.839875  118130 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17044-2333/.minikube/bin
	I0811 23:39:06.840297  118130 out.go:303] Setting JSON to false
	I0811 23:39:06.841426  118130 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":4895,"bootTime":1691792252,"procs":343,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0811 23:39:06.841528  118130 start.go:138] virtualization:  
	I0811 23:39:06.845033  118130 out.go:177] * [missing-upgrade-550468] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	I0811 23:39:06.847121  118130 out.go:177]   - MINIKUBE_LOCATION=17044
	I0811 23:39:06.847202  118130 notify.go:220] Checking for updates...
	I0811 23:39:06.849775  118130 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0811 23:39:06.851878  118130 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17044-2333/kubeconfig
	I0811 23:39:06.853375  118130 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17044-2333/.minikube
	I0811 23:39:06.854927  118130 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0811 23:39:06.856634  118130 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0811 23:39:06.862397  118130 config.go:182] Loaded profile config "missing-upgrade-550468": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0811 23:39:06.865217  118130 out.go:177] * Kubernetes 1.27.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.4
	I0811 23:39:06.866804  118130 driver.go:373] Setting default libvirt URI to qemu:///system
	I0811 23:39:06.892719  118130 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0811 23:39:06.892811  118130 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0811 23:39:06.990362  118130 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:53 SystemTime:2023-08-11 23:39:06.979489497 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0811 23:39:06.990473  118130 docker.go:294] overlay module found
	I0811 23:39:06.993777  118130 out.go:177] * Using the docker driver based on existing profile
	I0811 23:39:06.995695  118130 start.go:298] selected driver: docker
	I0811 23:39:06.995732  118130 start.go:901] validating driver "docker" against &{Name:missing-upgrade-550468 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-550468 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.69 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0811 23:39:06.995857  118130 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0811 23:39:06.996521  118130 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0811 23:39:07.077307  118130 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:53 SystemTime:2023-08-11 23:39:07.066315213 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0811 23:39:07.077591  118130 cni.go:84] Creating CNI manager for ""
	I0811 23:39:07.077603  118130 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0811 23:39:07.077613  118130 start_flags.go:319] config:
	{Name:missing-upgrade-550468 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-550468 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.69 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0811 23:39:07.082647  118130 out.go:177] * Starting control plane node missing-upgrade-550468 in cluster missing-upgrade-550468
	I0811 23:39:07.084390  118130 cache.go:122] Beginning downloading kic base image for docker with crio
	I0811 23:39:07.086285  118130 out.go:177] * Pulling base image ...
	I0811 23:39:07.088182  118130 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I0811 23:39:07.088248  118130 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I0811 23:39:07.111238  118130 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	I0811 23:39:07.111440  118130 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local cache directory
	I0811 23:39:07.111813  118130 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	W0811 23:39:07.171566  118130 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I0811 23:39:07.171724  118130 profile.go:148] Saving config to /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/missing-upgrade-550468/config.json ...
	I0811 23:39:07.172061  118130 cache.go:107] acquiring lock: {Name:mk7a5741f7b4e0b160bbe01f7fd094635998893d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 23:39:07.172148  118130 cache.go:115] /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0811 23:39:07.172165  118130 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 108.202µs
	I0811 23:39:07.172178  118130 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0811 23:39:07.172188  118130 cache.go:107] acquiring lock: {Name:mk94b66842cf8433baba636877bda45bc30090e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 23:39:07.172278  118130 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.2
	I0811 23:39:07.172472  118130 cache.go:107] acquiring lock: {Name:mk0c08839063be20f2e0c15aa4b7b2ce91a4b35c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 23:39:07.172551  118130 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.2
	I0811 23:39:07.172631  118130 cache.go:107] acquiring lock: {Name:mk7926f0ea7b917ece46b318e6b71004c7867d92 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 23:39:07.172716  118130 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.2
	I0811 23:39:07.172801  118130 cache.go:107] acquiring lock: {Name:mkaea833af7e0d47e53f03de55b4fcd709cc7efc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 23:39:07.172876  118130 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.2
	I0811 23:39:07.172961  118130 cache.go:107] acquiring lock: {Name:mk70992b6ec3b9f61b1ba2b19474481f643a50d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 23:39:07.173032  118130 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0811 23:39:07.173283  118130 cache.go:107] acquiring lock: {Name:mk6e8ba24db6814fb692d5c252326ec03b67fcc8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 23:39:07.173411  118130 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0811 23:39:07.173544  118130 cache.go:107] acquiring lock: {Name:mk73f2a48d6979c6af785fa1481e1aebfce32d75 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 23:39:07.173700  118130 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0811 23:39:07.177993  118130 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.2
	I0811 23:39:07.178998  118130 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.2
	I0811 23:39:07.179424  118130 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0811 23:39:07.179796  118130 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.2
	I0811 23:39:07.180006  118130 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0811 23:39:07.180201  118130 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.2
	I0811 23:39:07.180885  118130 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0811 23:39:07.670604  118130 cache.go:162] opening:  /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	W0811 23:39:07.733844  118130 image.go:265] image registry.k8s.io/etcd:3.4.13-0 arch mismatch: want arm64 got amd64. fixing
	I0811 23:39:07.734900  118130 cache.go:162] opening:  /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0
	I0811 23:39:07.775869  118130 cache.go:162] opening:  /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2
	W0811 23:39:07.780419  118130 image.go:265] image registry.k8s.io/kube-proxy:v1.20.2 arch mismatch: want arm64 got amd64. fixing
	I0811 23:39:07.780491  118130 cache.go:162] opening:  /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2
	I0811 23:39:07.790097  118130 cache.go:162] opening:  /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2
	I0811 23:39:07.791035  118130 cache.go:162] opening:  /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2
	W0811 23:39:07.800583  118130 image.go:265] image registry.k8s.io/coredns:1.7.0 arch mismatch: want arm64 got amd64. fixing
	I0811 23:39:07.800649  118130 cache.go:162] opening:  /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0
	    > gcr.io/k8s-minikube/kicbase...:  0 B [_______________________] ?% ? p/s ?
	I0811 23:39:07.833010  118130 cache.go:157] /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I0811 23:39:07.833037  118130 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 660.077015ms
	I0811 23:39:07.833050  118130 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  17.69 KiB / 287.99 MiB [>] 0.01% ? p/s ?
	I0811 23:39:08.163762  118130 cache.go:157] /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I0811 23:39:08.163790  118130 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 990.248352ms
	I0811 23:39:08.163802  118130 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  5.45 MiB / 287.99 MiB  1.89% 8.89 MiB p/s
	I0811 23:39:08.445446  118130 cache.go:157] /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I0811 23:39:08.445468  118130 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 1.272837465s
	I0811 23:39:08.445481  118130 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  16.02 MiB / 287.99 MiB  5.56% 8.89 MiB p/s
	I0811 23:39:08.695877  118130 cache.go:157] /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I0811 23:39:08.695917  118130 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 1.523446941s
	I0811 23:39:08.695954  118130 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I0811 23:39:08.729636  118130 cache.go:157] /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I0811 23:39:08.729795  118130 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 1.557604228s
	I0811 23:39:08.729826  118130 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  25.93 MiB / 287.99 MiB  9.00% 8.89 MiB p/s
	I0811 23:39:08.860185  118130 cache.go:157] /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I0811 23:39:08.860210  118130 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 1.687409968s
	I0811 23:39:08.860226  118130 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  52.25 MiB / 287.99 MiB  18.14% 10.55 MiB
	I0811 23:39:11.208633  118130 cache.go:157] /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I0811 23:39:11.208659  118130 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 4.035381231s
	I0811 23:39:11.208672  118130 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I0811 23:39:11.208687  118130 cache.go:87] Successfully saved all images to host disk.
	    > gcr.io/k8s-minikube/kicbase...:  287.99 MiB / 287.99 MiB  100.00% 23.24 MiB
	I0811 23:39:20.208213  118130 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e as a tarball
	I0811 23:39:20.208223  118130 cache.go:163] Loading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from local cache
	I0811 23:39:21.376745  118130 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from cached tarball
	I0811 23:39:21.376784  118130 cache.go:195] Successfully downloaded all kic artifacts
	I0811 23:39:21.376832  118130 start.go:365] acquiring machines lock for missing-upgrade-550468: {Name:mk3bbf124c71fb95b7ad3d1bf34e661c317191f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 23:39:21.376904  118130 start.go:369] acquired machines lock for "missing-upgrade-550468" in 51.89µs
	I0811 23:39:21.376924  118130 start.go:96] Skipping create...Using existing machine configuration
	I0811 23:39:21.376929  118130 fix.go:54] fixHost starting: 
	I0811 23:39:21.377241  118130 cli_runner.go:164] Run: docker container inspect missing-upgrade-550468 --format={{.State.Status}}
	W0811 23:39:21.414532  118130 cli_runner.go:211] docker container inspect missing-upgrade-550468 --format={{.State.Status}} returned with exit code 1
	I0811 23:39:21.414595  118130 fix.go:102] recreateIfNeeded on missing-upgrade-550468: state= err=unknown state "missing-upgrade-550468": docker container inspect missing-upgrade-550468 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-550468
	I0811 23:39:21.414613  118130 fix.go:107] machineExists: false. err=machine does not exist
	I0811 23:39:21.417324  118130 out.go:177] * docker "missing-upgrade-550468" container is missing, will recreate.
	I0811 23:39:21.418893  118130 delete.go:124] DEMOLISHING missing-upgrade-550468 ...
	I0811 23:39:21.418981  118130 cli_runner.go:164] Run: docker container inspect missing-upgrade-550468 --format={{.State.Status}}
	W0811 23:39:21.443732  118130 cli_runner.go:211] docker container inspect missing-upgrade-550468 --format={{.State.Status}} returned with exit code 1
	W0811 23:39:21.443784  118130 stop.go:75] unable to get state: unknown state "missing-upgrade-550468": docker container inspect missing-upgrade-550468 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-550468
	I0811 23:39:21.443801  118130 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-550468": docker container inspect missing-upgrade-550468 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-550468
	I0811 23:39:21.444249  118130 cli_runner.go:164] Run: docker container inspect missing-upgrade-550468 --format={{.State.Status}}
	W0811 23:39:21.461820  118130 cli_runner.go:211] docker container inspect missing-upgrade-550468 --format={{.State.Status}} returned with exit code 1
	I0811 23:39:21.461882  118130 delete.go:82] Unable to get host status for missing-upgrade-550468, assuming it has already been deleted: state: unknown state "missing-upgrade-550468": docker container inspect missing-upgrade-550468 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-550468
	I0811 23:39:21.461946  118130 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-550468
	W0811 23:39:21.487613  118130 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-550468 returned with exit code 1
	I0811 23:39:21.487644  118130 kic.go:367] could not find the container missing-upgrade-550468 to remove it. will try anyways
	I0811 23:39:21.487699  118130 cli_runner.go:164] Run: docker container inspect missing-upgrade-550468 --format={{.State.Status}}
	W0811 23:39:21.507926  118130 cli_runner.go:211] docker container inspect missing-upgrade-550468 --format={{.State.Status}} returned with exit code 1
	W0811 23:39:21.507977  118130 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-550468": docker container inspect missing-upgrade-550468 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-550468
	I0811 23:39:21.508038  118130 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-550468 /bin/bash -c "sudo init 0"
	W0811 23:39:21.527748  118130 cli_runner.go:211] docker exec --privileged -t missing-upgrade-550468 /bin/bash -c "sudo init 0" returned with exit code 1
	I0811 23:39:21.527777  118130 oci.go:647] error shutdown missing-upgrade-550468: docker exec --privileged -t missing-upgrade-550468 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-550468
	I0811 23:39:22.527977  118130 cli_runner.go:164] Run: docker container inspect missing-upgrade-550468 --format={{.State.Status}}
	W0811 23:39:22.551739  118130 cli_runner.go:211] docker container inspect missing-upgrade-550468 --format={{.State.Status}} returned with exit code 1
	I0811 23:39:22.551805  118130 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-550468": docker container inspect missing-upgrade-550468 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-550468
	I0811 23:39:22.551820  118130 oci.go:661] temporary error: container missing-upgrade-550468 status is  but expect it to be exited
	I0811 23:39:22.551848  118130 retry.go:31] will retry after 368.002919ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-550468": docker container inspect missing-upgrade-550468 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-550468
	I0811 23:39:22.920454  118130 cli_runner.go:164] Run: docker container inspect missing-upgrade-550468 --format={{.State.Status}}
	W0811 23:39:22.951763  118130 cli_runner.go:211] docker container inspect missing-upgrade-550468 --format={{.State.Status}} returned with exit code 1
	I0811 23:39:22.951827  118130 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-550468": docker container inspect missing-upgrade-550468 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-550468
	I0811 23:39:22.951837  118130 oci.go:661] temporary error: container missing-upgrade-550468 status is  but expect it to be exited
	I0811 23:39:22.951870  118130 retry.go:31] will retry after 741.337424ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-550468": docker container inspect missing-upgrade-550468 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-550468
	I0811 23:39:23.693400  118130 cli_runner.go:164] Run: docker container inspect missing-upgrade-550468 --format={{.State.Status}}
	W0811 23:39:23.730544  118130 cli_runner.go:211] docker container inspect missing-upgrade-550468 --format={{.State.Status}} returned with exit code 1
	I0811 23:39:23.730603  118130 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-550468": docker container inspect missing-upgrade-550468 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-550468
	I0811 23:39:23.730618  118130 oci.go:661] temporary error: container missing-upgrade-550468 status is  but expect it to be exited
	I0811 23:39:23.730640  118130 retry.go:31] will retry after 826.687647ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-550468": docker container inspect missing-upgrade-550468 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-550468
	I0811 23:39:24.558289  118130 cli_runner.go:164] Run: docker container inspect missing-upgrade-550468 --format={{.State.Status}}
	W0811 23:39:24.577135  118130 cli_runner.go:211] docker container inspect missing-upgrade-550468 --format={{.State.Status}} returned with exit code 1
	I0811 23:39:24.577191  118130 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-550468": docker container inspect missing-upgrade-550468 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-550468
	I0811 23:39:24.577201  118130 oci.go:661] temporary error: container missing-upgrade-550468 status is  but expect it to be exited
	I0811 23:39:24.577277  118130 retry.go:31] will retry after 2.273484147s: couldn't verify container is exited. %v: unknown state "missing-upgrade-550468": docker container inspect missing-upgrade-550468 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-550468
	I0811 23:39:26.851206  118130 cli_runner.go:164] Run: docker container inspect missing-upgrade-550468 --format={{.State.Status}}
	W0811 23:39:26.869280  118130 cli_runner.go:211] docker container inspect missing-upgrade-550468 --format={{.State.Status}} returned with exit code 1
	I0811 23:39:26.869346  118130 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-550468": docker container inspect missing-upgrade-550468 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-550468
	I0811 23:39:26.869355  118130 oci.go:661] temporary error: container missing-upgrade-550468 status is  but expect it to be exited
	I0811 23:39:26.869380  118130 retry.go:31] will retry after 3.701941069s: couldn't verify container is exited. %v: unknown state "missing-upgrade-550468": docker container inspect missing-upgrade-550468 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-550468
	I0811 23:39:30.573192  118130 cli_runner.go:164] Run: docker container inspect missing-upgrade-550468 --format={{.State.Status}}
	W0811 23:39:30.623295  118130 cli_runner.go:211] docker container inspect missing-upgrade-550468 --format={{.State.Status}} returned with exit code 1
	I0811 23:39:30.623352  118130 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-550468": docker container inspect missing-upgrade-550468 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-550468
	I0811 23:39:30.623365  118130 oci.go:661] temporary error: container missing-upgrade-550468 status is  but expect it to be exited
	I0811 23:39:30.623387  118130 retry.go:31] will retry after 4.535195462s: couldn't verify container is exited. %v: unknown state "missing-upgrade-550468": docker container inspect missing-upgrade-550468 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-550468
	I0811 23:39:35.161513  118130 cli_runner.go:164] Run: docker container inspect missing-upgrade-550468 --format={{.State.Status}}
	W0811 23:39:35.182795  118130 cli_runner.go:211] docker container inspect missing-upgrade-550468 --format={{.State.Status}} returned with exit code 1
	I0811 23:39:35.182863  118130 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-550468": docker container inspect missing-upgrade-550468 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-550468
	I0811 23:39:35.182877  118130 oci.go:661] temporary error: container missing-upgrade-550468 status is  but expect it to be exited
	I0811 23:39:35.182904  118130 retry.go:31] will retry after 5.726144087s: couldn't verify container is exited. %v: unknown state "missing-upgrade-550468": docker container inspect missing-upgrade-550468 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-550468
	I0811 23:39:40.909280  118130 cli_runner.go:164] Run: docker container inspect missing-upgrade-550468 --format={{.State.Status}}
	W0811 23:39:40.925279  118130 cli_runner.go:211] docker container inspect missing-upgrade-550468 --format={{.State.Status}} returned with exit code 1
	I0811 23:39:40.925339  118130 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-550468": docker container inspect missing-upgrade-550468 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-550468
	I0811 23:39:40.925351  118130 oci.go:661] temporary error: container missing-upgrade-550468 status is  but expect it to be exited
	I0811 23:39:40.925390  118130 oci.go:88] couldn't shut down missing-upgrade-550468 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-550468": docker container inspect missing-upgrade-550468 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-550468
	 
	I0811 23:39:40.925450  118130 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-550468
	I0811 23:39:40.942064  118130 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-550468
	W0811 23:39:40.958878  118130 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-550468 returned with exit code 1
	I0811 23:39:40.958973  118130 cli_runner.go:164] Run: docker network inspect missing-upgrade-550468 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0811 23:39:40.976950  118130 cli_runner.go:164] Run: docker network rm missing-upgrade-550468
	I0811 23:39:41.079154  118130 fix.go:114] Sleeping 1 second for extra luck!
	I0811 23:39:42.079819  118130 start.go:125] createHost starting for "" (driver="docker")
	I0811 23:39:42.082162  118130 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0811 23:39:42.082350  118130 start.go:159] libmachine.API.Create for "missing-upgrade-550468" (driver="docker")
	I0811 23:39:42.082389  118130 client.go:168] LocalClient.Create starting
	I0811 23:39:42.082506  118130 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem
	I0811 23:39:42.082553  118130 main.go:141] libmachine: Decoding PEM data...
	I0811 23:39:42.082574  118130 main.go:141] libmachine: Parsing certificate...
	I0811 23:39:42.082634  118130 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17044-2333/.minikube/certs/cert.pem
	I0811 23:39:42.082659  118130 main.go:141] libmachine: Decoding PEM data...
	I0811 23:39:42.082672  118130 main.go:141] libmachine: Parsing certificate...
	I0811 23:39:42.082976  118130 cli_runner.go:164] Run: docker network inspect missing-upgrade-550468 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0811 23:39:42.102144  118130 cli_runner.go:211] docker network inspect missing-upgrade-550468 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0811 23:39:42.102238  118130 network_create.go:281] running [docker network inspect missing-upgrade-550468] to gather additional debugging logs...
	I0811 23:39:42.102263  118130 cli_runner.go:164] Run: docker network inspect missing-upgrade-550468
	W0811 23:39:42.123193  118130 cli_runner.go:211] docker network inspect missing-upgrade-550468 returned with exit code 1
	I0811 23:39:42.123231  118130 network_create.go:284] error running [docker network inspect missing-upgrade-550468]: docker network inspect missing-upgrade-550468: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-550468 not found
	I0811 23:39:42.123247  118130 network_create.go:286] output of [docker network inspect missing-upgrade-550468]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-550468 not found
	
	** /stderr **
	I0811 23:39:42.123324  118130 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0811 23:39:42.143453  118130 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-cb015cdafab9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:3c:25:af:38} reservation:<nil>}
	I0811 23:39:42.143823  118130 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-c2f4372f433a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:1d:72:42:dd} reservation:<nil>}
	I0811 23:39:42.144185  118130 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8acd563f97fa IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:4d:c3:c5:b6} reservation:<nil>}
	I0811 23:39:42.144690  118130 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018cb140}
	I0811 23:39:42.144715  118130 network_create.go:123] attempt to create docker network missing-upgrade-550468 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0811 23:39:42.144781  118130 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-550468 missing-upgrade-550468
	I0811 23:39:42.228666  118130 network_create.go:107] docker network missing-upgrade-550468 192.168.76.0/24 created
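The three "skipping subnet" lines show the allocator walking candidate private /24s; in this log the third octet advances in steps of 9 (49, 58, 67, 76) until it reaches a subnet no existing bridge claims. A sketch of that scan, with the start octet and step read off the log rather than taken from minikube's source:

	import "fmt"

	// firstFreeSubnet returns the first candidate 192.168.x.0/24 not present
	// in the set of subnets already claimed by docker bridges.
	func firstFreeSubnet(taken map[string]bool) string {
		for octet := 49; octet < 256; octet += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", octet)
			if !taken[subnet] {
				return subnet
			}
		}
		return "" // exhausted the candidate range
	}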
	I0811 23:39:42.228700  118130 kic.go:117] calculated static IP "192.168.76.2" for the "missing-upgrade-550468" container
	I0811 23:39:42.228793  118130 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0811 23:39:42.247998  118130 cli_runner.go:164] Run: docker volume create missing-upgrade-550468 --label name.minikube.sigs.k8s.io=missing-upgrade-550468 --label created_by.minikube.sigs.k8s.io=true
	I0811 23:39:42.266555  118130 oci.go:103] Successfully created a docker volume missing-upgrade-550468
	I0811 23:39:42.266642  118130 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-550468-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-550468 --entrypoint /usr/bin/test -v missing-upgrade-550468:/var gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -d /var/lib
	I0811 23:39:42.866704  118130 oci.go:107] Successfully prepared a docker volume missing-upgrade-550468
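The `--entrypoint /usr/bin/test` container above is a throwaway "preload sidecar": its only command is `test -d /var/lib`, but mounting the named volume at /var forces docker to create the volume and copy the image's /var content into it before the real node container mounts the same volume. A sketch of the trick, assuming the docker CLI:

	import "os/exec"

	// prepVolume populates a named volume from the image's /var by running a
	// --rm container whose entrypoint merely tests that /var/lib exists.
	func prepVolume(volume, image string) error {
		return exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/test",
			"-v", volume+":/var",
			image,
			"-d", "/var/lib").Run()
	}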
	I0811 23:39:42.866741  118130 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	W0811 23:39:42.866880  118130 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0811 23:39:42.867006  118130 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0811 23:39:42.938324  118130 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-550468 --name missing-upgrade-550468 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-550468 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-550468 --network missing-upgrade-550468 --ip 192.168.76.2 --volume missing-upgrade-550468:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e
	I0811 23:39:43.303131  118130 cli_runner.go:164] Run: docker container inspect missing-upgrade-550468 --format={{.State.Running}}
	I0811 23:39:43.327876  118130 cli_runner.go:164] Run: docker container inspect missing-upgrade-550468 --format={{.State.Status}}
	I0811 23:39:43.352712  118130 cli_runner.go:164] Run: docker exec missing-upgrade-550468 stat /var/lib/dpkg/alternatives/iptables
	I0811 23:39:43.419563  118130 oci.go:144] the created container "missing-upgrade-550468" has a running status.
	I0811 23:39:43.419589  118130 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17044-2333/.minikube/machines/missing-upgrade-550468/id_rsa...
	I0811 23:39:43.626301  118130 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17044-2333/.minikube/machines/missing-upgrade-550468/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0811 23:39:43.658343  118130 cli_runner.go:164] Run: docker container inspect missing-upgrade-550468 --format={{.State.Status}}
	I0811 23:39:43.682105  118130 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0811 23:39:43.682124  118130 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-550468 chown docker:docker /home/docker/.ssh/authorized_keys]
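Key provisioning here is three steps: generate an RSA keypair on the host, copy the public half into the container's /home/docker/.ssh/authorized_keys (the 381 bytes above), and fix ownership via a privileged exec. A sketch of the generation step, assuming golang.org/x/crypto/ssh is available:

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"encoding/pem"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// writeKeyPair writes id_rsa and id_rsa.pub in the formats the log shows
	// being installed into the container.
	func writeKeyPair(path string) error {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return err
		}
		privPEM := pem.EncodeToMemory(&pem.Block{
			Type:  "RSA PRIVATE KEY",
			Bytes: x509.MarshalPKCS1PrivateKey(key),
		})
		pub, err := ssh.NewPublicKey(&key.PublicKey)
		if err != nil {
			return err
		}
		if err := os.WriteFile(path, privPEM, 0600); err != nil {
			return err
		}
		return os.WriteFile(path+".pub", ssh.MarshalAuthorizedKey(pub), 0644)
	}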
	I0811 23:39:43.781495  118130 cli_runner.go:164] Run: docker container inspect missing-upgrade-550468 --format={{.State.Status}}
	I0811 23:39:43.807891  118130 machine.go:88] provisioning docker machine ...
	I0811 23:39:43.807919  118130 ubuntu.go:169] provisioning hostname "missing-upgrade-550468"
	I0811 23:39:43.807981  118130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-550468
	I0811 23:39:43.840361  118130 main.go:141] libmachine: Using SSH client type: native
	I0811 23:39:43.840830  118130 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f940] 0x3a22d0 <nil>  [] 0s} 127.0.0.1 32946 <nil> <nil>}
	I0811 23:39:43.840844  118130 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-550468 && echo "missing-upgrade-550468" | sudo tee /etc/hostname
	I0811 23:39:43.842495  118130 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0811 23:39:46.993648  118130 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-550468
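Note the first dial at 23:39:43.842 fails with "ssh: handshake failed: EOF" because sshd inside the freshly started container is not up yet; the same command succeeds about three seconds later. That is a plain dial-with-retry loop, sketched below with illustrative timings (not minikube's actual values):

	import (
		"fmt"
		"net"
		"time"
	)

	// dialWithRetry retries a TCP dial until it succeeds or the deadline
	// passes; early EOF/refused errors while a container boots are expected.
	func dialWithRetry(addr string, timeout time.Duration) (net.Conn, error) {
		deadline := time.Now().Add(timeout)
		for {
			conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
			if err == nil {
				return conn, nil
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("dial %s: %w", addr, err)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}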
	
	I0811 23:39:46.993750  118130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-550468
	I0811 23:39:47.013531  118130 main.go:141] libmachine: Using SSH client type: native
	I0811 23:39:47.013976  118130 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f940] 0x3a22d0 <nil>  [] 0s} 127.0.0.1 32946 <nil> <nil>}
	I0811 23:39:47.014000  118130 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-550468' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-550468/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-550468' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0811 23:39:47.153989  118130 main.go:141] libmachine: SSH cmd err, output: <nil>: 
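The script that just ran guarantees the new hostname resolves locally: if no /etc/hosts line already ends in the hostname, it rewrites an existing 127.0.1.1 entry in place with sed, otherwise it appends one. The same snippet can be templated per hostname, as in this sketch:

	import "fmt"

	// hostsFixup renders the /etc/hosts repair script from the log for an
	// arbitrary hostname.
	func hostsFixup(hostname string) string {
		return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
	}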
	I0811 23:39:47.154017  118130 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17044-2333/.minikube CaCertPath:/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17044-2333/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17044-2333/.minikube}
	I0811 23:39:47.154037  118130 ubuntu.go:177] setting up certificates
	I0811 23:39:47.154046  118130 provision.go:83] configureAuth start
	I0811 23:39:47.154104  118130 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-550468
	I0811 23:39:47.172912  118130 provision.go:138] copyHostCerts
	I0811 23:39:47.172975  118130 exec_runner.go:144] found /home/jenkins/minikube-integration/17044-2333/.minikube/ca.pem, removing ...
	I0811 23:39:47.172989  118130 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17044-2333/.minikube/ca.pem
	I0811 23:39:47.173130  118130 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17044-2333/.minikube/ca.pem (1082 bytes)
	I0811 23:39:47.173238  118130 exec_runner.go:144] found /home/jenkins/minikube-integration/17044-2333/.minikube/cert.pem, removing ...
	I0811 23:39:47.173249  118130 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17044-2333/.minikube/cert.pem
	I0811 23:39:47.173282  118130 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17044-2333/.minikube/cert.pem (1123 bytes)
	I0811 23:39:47.173350  118130 exec_runner.go:144] found /home/jenkins/minikube-integration/17044-2333/.minikube/key.pem, removing ...
	I0811 23:39:47.173358  118130 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17044-2333/.minikube/key.pem
	I0811 23:39:47.173383  118130 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17044-2333/.minikube/key.pem (1675 bytes)
	I0811 23:39:47.173481  118130 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17044-2333/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-550468 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-550468]
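The server cert generated here carries the SANs listed in the log: the machine IP 192.168.76.2, loopback, and the names localhost, minikube, and the profile name. A compact sketch of signing such a cert against the CA with crypto/x509 (serial handling and validity simplified; this is not minikube's provision code):

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// serverCert signs a DER-encoded server certificate whose SANs match the
	// san=[...] list in the log line above.
	func serverCert(ca *x509.Certificate, caKey, key *rsa.PrivateKey) ([]byte, error) {
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.missing-upgrade-550468"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
			DNSNames:     []string{"localhost", "minikube", "missing-upgrade-550468"},
		}
		return x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	}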
	I0811 23:39:47.582932  118130 provision.go:172] copyRemoteCerts
	I0811 23:39:47.582999  118130 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0811 23:39:47.583043  118130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-550468
	I0811 23:39:47.600586  118130 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32946 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/missing-upgrade-550468/id_rsa Username:docker}
	I0811 23:39:47.701982  118130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0811 23:39:47.724057  118130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0811 23:39:47.747486  118130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0811 23:39:47.769007  118130 provision.go:86] duration metric: configureAuth took 614.947818ms
	I0811 23:39:47.769032  118130 ubuntu.go:193] setting minikube options for container-runtime
	I0811 23:39:47.769235  118130 config.go:182] Loaded profile config "missing-upgrade-550468": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0811 23:39:47.769346  118130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-550468
	I0811 23:39:47.787228  118130 main.go:141] libmachine: Using SSH client type: native
	I0811 23:39:47.787667  118130 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f940] 0x3a22d0 <nil>  [] 0s} 127.0.0.1 32946 <nil> <nil>}
	I0811 23:39:47.787693  118130 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0811 23:39:48.213462  118130 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0811 23:39:48.213517  118130 machine.go:91] provisioned docker machine in 4.405608132s
	I0811 23:39:48.213541  118130 client.go:171] LocalClient.Create took 6.131143251s
	I0811 23:39:48.213566  118130 start.go:167] duration metric: libmachine.API.Create for "missing-upgrade-550468" took 6.131218452s
	I0811 23:39:48.213593  118130 start.go:300] post-start starting for "missing-upgrade-550468" (driver="docker")
	I0811 23:39:48.213620  118130 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0811 23:39:48.213705  118130 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0811 23:39:48.213768  118130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-550468
	I0811 23:39:48.231936  118130 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32946 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/missing-upgrade-550468/id_rsa Username:docker}
	I0811 23:39:48.330329  118130 ssh_runner.go:195] Run: cat /etc/os-release
	I0811 23:39:48.334493  118130 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0811 23:39:48.334517  118130 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0811 23:39:48.334529  118130 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0811 23:39:48.334536  118130 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0811 23:39:48.334567  118130 filesync.go:126] Scanning /home/jenkins/minikube-integration/17044-2333/.minikube/addons for local assets ...
	I0811 23:39:48.334642  118130 filesync.go:126] Scanning /home/jenkins/minikube-integration/17044-2333/.minikube/files for local assets ...
	I0811 23:39:48.334721  118130 filesync.go:149] local asset: /home/jenkins/minikube-integration/17044-2333/.minikube/files/etc/ssl/certs/76342.pem -> 76342.pem in /etc/ssl/certs
	I0811 23:39:48.334829  118130 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0811 23:39:48.343702  118130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/files/etc/ssl/certs/76342.pem --> /etc/ssl/certs/76342.pem (1708 bytes)
	I0811 23:39:48.367090  118130 start.go:303] post-start completed in 153.466727ms
	I0811 23:39:48.367454  118130 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-550468
	I0811 23:39:48.385515  118130 profile.go:148] Saving config to /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/missing-upgrade-550468/config.json ...
	I0811 23:39:48.385817  118130 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0811 23:39:48.385872  118130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-550468
	I0811 23:39:48.404044  118130 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32946 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/missing-upgrade-550468/id_rsa Username:docker}
	I0811 23:39:48.499991  118130 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0811 23:39:48.505382  118130 start.go:128] duration metric: createHost completed in 6.42550486s
	I0811 23:39:48.505480  118130 cli_runner.go:164] Run: docker container inspect missing-upgrade-550468 --format={{.State.Status}}
	W0811 23:39:48.523006  118130 fix.go:128] unexpected machine state, will restart: <nil>
	I0811 23:39:48.523029  118130 machine.go:88] provisioning docker machine ...
	I0811 23:39:48.523046  118130 ubuntu.go:169] provisioning hostname "missing-upgrade-550468"
	I0811 23:39:48.523110  118130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-550468
	I0811 23:39:48.544238  118130 main.go:141] libmachine: Using SSH client type: native
	I0811 23:39:48.544694  118130 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f940] 0x3a22d0 <nil>  [] 0s} 127.0.0.1 32946 <nil> <nil>}
	I0811 23:39:48.544710  118130 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-550468 && echo "missing-upgrade-550468" | sudo tee /etc/hostname
	I0811 23:39:48.693364  118130 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-550468
	
	I0811 23:39:48.693445  118130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-550468
	I0811 23:39:48.711981  118130 main.go:141] libmachine: Using SSH client type: native
	I0811 23:39:48.712420  118130 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f940] 0x3a22d0 <nil>  [] 0s} 127.0.0.1 32946 <nil> <nil>}
	I0811 23:39:48.712444  118130 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-550468' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-550468/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-550468' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0811 23:39:48.854132  118130 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0811 23:39:48.854209  118130 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17044-2333/.minikube CaCertPath:/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17044-2333/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17044-2333/.minikube}
	I0811 23:39:48.854248  118130 ubuntu.go:177] setting up certificates
	I0811 23:39:48.854309  118130 provision.go:83] configureAuth start
	I0811 23:39:48.854406  118130 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-550468
	I0811 23:39:48.872052  118130 provision.go:138] copyHostCerts
	I0811 23:39:48.872116  118130 exec_runner.go:144] found /home/jenkins/minikube-integration/17044-2333/.minikube/cert.pem, removing ...
	I0811 23:39:48.872125  118130 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17044-2333/.minikube/cert.pem
	I0811 23:39:48.872207  118130 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17044-2333/.minikube/cert.pem (1123 bytes)
	I0811 23:39:48.872301  118130 exec_runner.go:144] found /home/jenkins/minikube-integration/17044-2333/.minikube/key.pem, removing ...
	I0811 23:39:48.872306  118130 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17044-2333/.minikube/key.pem
	I0811 23:39:48.872332  118130 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17044-2333/.minikube/key.pem (1675 bytes)
	I0811 23:39:48.872382  118130 exec_runner.go:144] found /home/jenkins/minikube-integration/17044-2333/.minikube/ca.pem, removing ...
	I0811 23:39:48.872386  118130 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17044-2333/.minikube/ca.pem
	I0811 23:39:48.872409  118130 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17044-2333/.minikube/ca.pem (1082 bytes)
	I0811 23:39:48.872451  118130 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17044-2333/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-550468 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-550468]
	I0811 23:39:49.298640  118130 provision.go:172] copyRemoteCerts
	I0811 23:39:49.298707  118130 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0811 23:39:49.298749  118130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-550468
	I0811 23:39:49.318885  118130 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32946 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/missing-upgrade-550468/id_rsa Username:docker}
	I0811 23:39:49.418451  118130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0811 23:39:49.441000  118130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0811 23:39:49.464082  118130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0811 23:39:49.487080  118130 provision.go:86] duration metric: configureAuth took 632.740474ms
	I0811 23:39:49.487104  118130 ubuntu.go:193] setting minikube options for container-runtime
	I0811 23:39:49.487279  118130 config.go:182] Loaded profile config "missing-upgrade-550468": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0811 23:39:49.487384  118130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-550468
	I0811 23:39:49.505604  118130 main.go:141] libmachine: Using SSH client type: native
	I0811 23:39:49.506047  118130 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f940] 0x3a22d0 <nil>  [] 0s} 127.0.0.1 32946 <nil> <nil>}
	I0811 23:39:49.506072  118130 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0811 23:39:49.807129  118130 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0811 23:39:49.807149  118130 machine.go:91] provisioned docker machine in 1.284112161s
	I0811 23:39:49.807159  118130 start.go:300] post-start starting for "missing-upgrade-550468" (driver="docker")
	I0811 23:39:49.807169  118130 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0811 23:39:49.807234  118130 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0811 23:39:49.807277  118130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-550468
	I0811 23:39:49.825691  118130 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32946 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/missing-upgrade-550468/id_rsa Username:docker}
	I0811 23:39:49.926542  118130 ssh_runner.go:195] Run: cat /etc/os-release
	I0811 23:39:49.930552  118130 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0811 23:39:49.930581  118130 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0811 23:39:49.930611  118130 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0811 23:39:49.930620  118130 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0811 23:39:49.930634  118130 filesync.go:126] Scanning /home/jenkins/minikube-integration/17044-2333/.minikube/addons for local assets ...
	I0811 23:39:49.930706  118130 filesync.go:126] Scanning /home/jenkins/minikube-integration/17044-2333/.minikube/files for local assets ...
	I0811 23:39:49.930786  118130 filesync.go:149] local asset: /home/jenkins/minikube-integration/17044-2333/.minikube/files/etc/ssl/certs/76342.pem -> 76342.pem in /etc/ssl/certs
	I0811 23:39:49.930891  118130 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0811 23:39:49.939837  118130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/files/etc/ssl/certs/76342.pem --> /etc/ssl/certs/76342.pem (1708 bytes)
	I0811 23:39:49.962894  118130 start.go:303] post-start completed in 155.720081ms
	I0811 23:39:49.962976  118130 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0811 23:39:49.963021  118130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-550468
	I0811 23:39:49.981511  118130 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32946 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/missing-upgrade-550468/id_rsa Username:docker}
	I0811 23:39:50.083725  118130 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0811 23:39:50.089879  118130 fix.go:56] fixHost completed within 28.712941778s
	I0811 23:39:50.089906  118130 start.go:83] releasing machines lock for "missing-upgrade-550468", held for 28.712994308s
	I0811 23:39:50.089987  118130 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-550468
	I0811 23:39:50.109446  118130 ssh_runner.go:195] Run: cat /version.json
	I0811 23:39:50.109473  118130 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0811 23:39:50.109503  118130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-550468
	I0811 23:39:50.109535  118130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-550468
	I0811 23:39:50.130707  118130 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32946 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/missing-upgrade-550468/id_rsa Username:docker}
	I0811 23:39:50.133279  118130 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32946 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/missing-upgrade-550468/id_rsa Username:docker}
	W0811 23:39:50.229943  118130 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
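/version.json is how newer minikube identifies the base image it is running on; the v0.0.17 kicbase booted by this test evidently predates the file, so the warning is tolerated and startup continues. A sketch of that tolerant read (the JSON field name is an assumption for illustration, not minikube's schema):

	import (
		"encoding/json"
		"errors"
		"io/fs"
		"os"
	)

	type baseImageVersion struct {
		KicbaseVersion string `json:"kicbase_version"` // assumed field name
	}

	// readVersionJSON treats a missing /version.json as "unknown" rather than
	// an error, matching the tolerated warning in the log.
	func readVersionJSON(path string) (*baseImageVersion, error) {
		data, err := os.ReadFile(path)
		if errors.Is(err, fs.ErrNotExist) {
			return nil, nil // older base image: file absent, not fatal
		}
		if err != nil {
			return nil, err
		}
		v := &baseImageVersion{}
		return v, json.Unmarshal(data, v)
	}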
	I0811 23:39:50.230026  118130 ssh_runner.go:195] Run: systemctl --version
	I0811 23:39:50.368908  118130 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0811 23:39:50.471323  118130 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0811 23:39:50.477144  118130 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0811 23:39:50.505454  118130 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0811 23:39:50.505558  118130 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0811 23:39:50.546611  118130 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
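Both find/mv passes disable CNI configs non-destructively: matching files are renamed to *.mk_disabled so CRI-O's own CNI setup wins, and the originals can be restored later. The same idea in Go, with the glob patterns taken from the log:

	import (
		"os"
		"path/filepath"
		"strings"
	)

	// disableCNIConfs renames matching files under /etc/cni/net.d to
	// *.mk_disabled, skipping files that are already disabled.
	func disableCNIConfs(patterns ...string) error {
		for _, pat := range patterns {
			matches, err := filepath.Glob(filepath.Join("/etc/cni/net.d", pat))
			if err != nil {
				return err
			}
			for _, f := range matches {
				if strings.HasSuffix(f, ".mk_disabled") {
					continue
				}
				if err := os.Rename(f, f+".mk_disabled"); err != nil {
					return err
				}
			}
		}
		return nil
	}

	// usage, per the log: disableCNIConfs("*loopback.conf*", "*bridge*", "*podman*")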
	I0811 23:39:50.546635  118130 start.go:466] detecting cgroup driver to use...
	I0811 23:39:50.546694  118130 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0811 23:39:50.546775  118130 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0811 23:39:50.577240  118130 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0811 23:39:50.590347  118130 docker.go:196] disabling cri-docker service (if available) ...
	I0811 23:39:50.590454  118130 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0811 23:39:50.603609  118130 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0811 23:39:50.616933  118130 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0811 23:39:50.630835  118130 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0811 23:39:50.630948  118130 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0811 23:39:50.767567  118130 docker.go:212] disabling docker service ...
	I0811 23:39:50.767684  118130 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0811 23:39:50.782255  118130 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0811 23:39:50.796112  118130 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0811 23:39:50.929708  118130 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0811 23:39:51.062805  118130 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0811 23:39:51.077751  118130 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0811 23:39:51.101204  118130 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0811 23:39:51.101325  118130 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0811 23:39:51.116011  118130 out.go:177] 
	W0811 23:39:51.117773  118130 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
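This is the actual point of failure for TestMissingContainerUpgrade: the new binary rewrites pause_image in the drop-in /etc/crio/crio.conf.d/02-crio.conf, but the v0.0.17 kicbase image this upgrade test boots evidently ships a CRI-O configured only through /etc/crio/crio.conf, so the drop-in directory does not exist, sed exits 2, and start aborts with RUNTIME_ENABLE. A guard tolerating the missing drop-in might look like the sketch below (a hypothetical fix, not minikube's code; `run` stands in for the ssh_runner seen in the log):

	import "fmt"

	// setPauseImage edits pause_image when the drop-in already carries it and
	// appends a minimal [crio.image] stanza otherwise.
	func setPauseImage(run func(cmd string) error, image string) error {
		conf := "/etc/crio/crio.conf.d/02-crio.conf"
		script := fmt.Sprintf(`sudo mkdir -p /etc/crio/crio.conf.d
	if sudo grep -q 'pause_image = ' %[1]s 2>/dev/null; then
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "%[2]s"|' %[1]s
	else
	  printf '[crio.image]\npause_image = "%[2]s"\n' | sudo tee -a %[1]s >/dev/null
	fi`, conf, image)
		return run(script)
	}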
	
	W0811 23:39:51.117969  118130 out.go:239] * 
	* 
	W0811 23:39:51.118933  118130 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0811 23:39:51.120293  118130 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:343: failed missing container upgrade from v1.17.0. args: out/minikube-linux-arm64 start -p missing-upgrade-550468 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 90
version_upgrade_test.go:345: *** TestMissingContainerUpgrade FAILED at 2023-08-11 23:39:51.167474514 +0000 UTC m=+2350.079835055
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-550468
helpers_test.go:235: (dbg) docker inspect missing-upgrade-550468:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3f287e80141dfd0fdabee1d1716386c82854645f1ac6a87b8d59ffdbdf51aa16",
	        "Created": "2023-08-11T23:39:42.955148572Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 120140,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-08-11T23:39:43.290099776Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/3f287e80141dfd0fdabee1d1716386c82854645f1ac6a87b8d59ffdbdf51aa16/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3f287e80141dfd0fdabee1d1716386c82854645f1ac6a87b8d59ffdbdf51aa16/hostname",
	        "HostsPath": "/var/lib/docker/containers/3f287e80141dfd0fdabee1d1716386c82854645f1ac6a87b8d59ffdbdf51aa16/hosts",
	        "LogPath": "/var/lib/docker/containers/3f287e80141dfd0fdabee1d1716386c82854645f1ac6a87b8d59ffdbdf51aa16/3f287e80141dfd0fdabee1d1716386c82854645f1ac6a87b8d59ffdbdf51aa16-json.log",
	        "Name": "/missing-upgrade-550468",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-550468:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "missing-upgrade-550468",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/dac9cf63c284be7a3b47d40049bdf46c85a97abc4c6abb12ba0224af4abc2f7e-init/diff:/var/lib/docker/overlay2/f6b5f4a1984dd6b2e736b812227c36ae23973d0cbcb0f607a1f2a316015bc04e/diff:/var/lib/docker/overlay2/1c67440e0c27d857fbacbe8726e57cc4e185293ef97089d76c8914039cf308bb/diff:/var/lib/docker/overlay2/c2d2cbd188b682d0deb79609216d18f4870be3d3643e2e751ad25ae9fc5deae8/diff:/var/lib/docker/overlay2/0b116b585e7a1233d5dfb4117c4b64b81a39ab76810fadcaafa07507c4e88499/diff:/var/lib/docker/overlay2/5ab54810269fb8bf7d760767f56d80d88723c5e3e381a622bf77a6a3a7785240/diff:/var/lib/docker/overlay2/e089e01a71c21c17fbe3466b9c68478c59eb61585f8c3d9d0ff699f185c50494/diff:/var/lib/docker/overlay2/faa3c301825cc6d96bc681af60d302954c04994a0b56f7aa4d9e2bb69e49f596/diff:/var/lib/docker/overlay2/70abf9f9a37c3eb9cdb0acfea07d784d97e60111b9e59ab6765907762bd50fcd/diff:/var/lib/docker/overlay2/95864a2dc9c10c1228b5bbb5a8e75b84c3dffb7af92315d99f24d8bd98ab847a/diff:/var/lib/docker/overlay2/e02f38
a761c051fa89fc7265c959f809cf433c946d8c1b587a1fe7119388ef51/diff:/var/lib/docker/overlay2/06345b92b20bb93d3a7a00ecce2c8b0b1d9c6f3e2afffa166da3b696a65a711d/diff:/var/lib/docker/overlay2/b759d7f369ba0931d6d30936ccfd7ff1d3e383e4953bbec902bd42d08d91fd8a/diff:/var/lib/docker/overlay2/861c2e06c3a6d666274a4c5638640f4ed6d943f4d6f57e6bef55fdc97cc106d7/diff:/var/lib/docker/overlay2/56832fed6164a3f29a824ebab0142ada37f2b9ff643c468fda96e57ba2fb0cf3/diff:/var/lib/docker/overlay2/e6f4cf3f44c5355f75911350d0e8f09385edc277a82b260f47bc5aa1420b69c0/diff:/var/lib/docker/overlay2/9d0e371c22717ff879901c792272b11efbfc9e2b9e84b610b427d3b991a64afc/diff:/var/lib/docker/overlay2/2885f069a5b7856a77dd1cd13b97a5bf60e76e35219e99fac20f9ca17ee78fec/diff:/var/lib/docker/overlay2/24fda60df17c39086911731474343f5a14b7e5641a7863f23f0e54dcd9248dd1/diff:/var/lib/docker/overlay2/1214b1478e3270fa38cd4c65fb992ee4d4d22518d5ec2958efc5063ecd9cb6a0/diff:/var/lib/docker/overlay2/c0b48049c816ae324d24d6e50344b7dc395059195b6c038f87af49295c958bf1/diff:/var/lib/d
ocker/overlay2/274944d6e6d135c9dd829e7949ac5f215b5862e47b3aa456fe5205a36ea590d6/diff:/var/lib/docker/overlay2/29b325aaeda230f2cf356035def829ecf66880ce40d3a5ab865397827716084f/diff:/var/lib/docker/overlay2/2204153196cf450ffef2b5ffe9dadbc15098d689a3ee8c4768c896fea1524a30/diff:/var/lib/docker/overlay2/0af3a7bc58af17666663c9f6269a9fd2e2ef57f483c8f3c322fd58d60a0c2acc/diff:/var/lib/docker/overlay2/50ee0aa52c1101f265d058ac7c6cb9d76b7406bece3ff786bbeb61376389044a/diff:/var/lib/docker/overlay2/03c6931c4166eee33f25c3564a0fac4cbf9c80deac372a6f15eb0d67937bb81a/diff:/var/lib/docker/overlay2/3e0cd1e4bff2213391a0619326df80464f712f64583e66e8dc626b8f30b7979d/diff:/var/lib/docker/overlay2/6526cb5b368ea4fd77ed9d3c4532f472aa977598f63bcc4563dab85be67b6807/diff:/var/lib/docker/overlay2/072d4667f6e487bd0fc874b21277688f7a5c2fb35a987bf12e2dcca17776da05/diff:/var/lib/docker/overlay2/b0a58a45636499880957f3177eb1905822852ccd2c8c89244392ee0655a8fc88/diff:/var/lib/docker/overlay2/44dd6bc233c32eded2fdc517e836adccf206ccdd49bae5502f14549f06b
6d473/diff:/var/lib/docker/overlay2/cee2a718ef5182ebff21069cc10c6a748588ec5dff3ce443a048a0fc55e556df/diff:/var/lib/docker/overlay2/f5a09841789d7fdafa1e1837c654dbc6ed940869d0d03d38a78d906b52e19e8e/diff:/var/lib/docker/overlay2/dfe4abb91d08d9b6e02ec3d418a1262e966d84b16c01b7f45c3f531333f58096/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dac9cf63c284be7a3b47d40049bdf46c85a97abc4c6abb12ba0224af4abc2f7e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dac9cf63c284be7a3b47d40049bdf46c85a97abc4c6abb12ba0224af4abc2f7e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dac9cf63c284be7a3b47d40049bdf46c85a97abc4c6abb12ba0224af4abc2f7e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-550468",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-550468/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-550468",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-550468",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-550468",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2a0378dfaf5df821730540e01ccf4d666dacb80c979ddc34c6643627c184c43d",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32946"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32945"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32942"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32944"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32943"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/2a0378dfaf5d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "missing-upgrade-550468": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3f287e80141d",
	                        "missing-upgrade-550468"
	                    ],
	                    "NetworkID": "9bf9095eee1c3f515dc9d8cdebef611efa349539921ba7b6915b9439b8f80abb",
	                    "EndpointID": "900264610f9096aeefec76bf858e323c154413e097745bf275ce277fa9d9fe4c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
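The inspect output shows the container itself was healthy when the test died: State.Running is true, 22/tcp is published to 127.0.0.1:32946 (the port every SSH step above dialed), and the network endpoint carries the static 192.168.76.2 address. Individual fields can be pulled with the same Go-template mechanism the log's cli_runner calls use, for example:

	import (
		"os/exec"
		"strings"
	)

	// sshHostPort returns the host port docker published for 22/tcp, using
	// the exact template seen throughout the log above.
	func sshHostPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}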
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-550468 -n missing-upgrade-550468
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-550468 -n missing-upgrade-550468: exit status 6 (418.462341ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0811 23:39:51.596184  121175 status.go:415] kubeconfig endpoint: got: 192.168.59.69:8443, want: 192.168.76.2:8443

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-550468" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
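Exit status 6 here is a kubeconfig mismatch rather than a dead host: the profile's kubeconfig still records the endpoint of the old VM (192.168.59.69:8443) while the recreated container answers at 192.168.76.2:8443, which is exactly what status.go:415 reports and what `minikube update-context` would rewrite. A sketch of the "got" side of that comparison, assuming k8s.io/client-go is available for kubeconfig parsing:

	import (
		"fmt"
		"strings"

		"k8s.io/client-go/tools/clientcmd"
	)

	// kubeconfigEndpoint returns the host:port recorded for a named cluster,
	// the value compared against the live container IP in the log.
	func kubeconfigEndpoint(path, cluster string) (string, error) {
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			return "", err
		}
		c, ok := cfg.Clusters[cluster]
		if !ok {
			return "", fmt.Errorf("cluster %q not found in %s", cluster, path)
		}
		return strings.TrimPrefix(c.Server, "https://"), nil
	}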
helpers_test.go:175: Cleaning up "missing-upgrade-550468" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-550468
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-550468: (1.908413155s)
--- FAIL: TestMissingContainerUpgrade (185.53s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (82.81s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /tmp/minikube-v1.17.0.3839454342.exe start -p stopped-upgrade-773979 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:195: (dbg) Done: /tmp/minikube-v1.17.0.3839454342.exe start -p stopped-upgrade-773979 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m4.333922662s)
version_upgrade_test.go:204: (dbg) Run:  /tmp/minikube-v1.17.0.3839454342.exe -p stopped-upgrade-773979 stop
version_upgrade_test.go:204: (dbg) Done: /tmp/minikube-v1.17.0.3839454342.exe -p stopped-upgrade-773979 stop: (12.173911768s)
version_upgrade_test.go:210: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-773979 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:210: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p stopped-upgrade-773979 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (6.302685201s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-773979] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17044
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17044-2333/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17044-2333/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.4
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-773979 in cluster stopped-upgrade-773979
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-773979" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0811 23:41:11.249595  125641 out.go:296] Setting OutFile to fd 1 ...
	I0811 23:41:11.249753  125641 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:41:11.249762  125641 out.go:309] Setting ErrFile to fd 2...
	I0811 23:41:11.249767  125641 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:41:11.250039  125641 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17044-2333/.minikube/bin
	I0811 23:41:11.250398  125641 out.go:303] Setting JSON to false
	I0811 23:41:11.254058  125641 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5020,"bootTime":1691792252,"procs":269,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0811 23:41:11.254187  125641 start.go:138] virtualization:  
	I0811 23:41:11.256742  125641 out.go:177] * [stopped-upgrade-773979] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	I0811 23:41:11.258317  125641 out.go:177]   - MINIKUBE_LOCATION=17044
	I0811 23:41:11.260459  125641 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0811 23:41:11.258408  125641 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17044-2333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I0811 23:41:11.258447  125641 notify.go:220] Checking for updates...
	I0811 23:41:11.263127  125641 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17044-2333/kubeconfig
	I0811 23:41:11.264663  125641 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17044-2333/.minikube
	I0811 23:41:11.267160  125641 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0811 23:41:11.269718  125641 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0811 23:41:11.272183  125641 config.go:182] Loaded profile config "stopped-upgrade-773979": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0811 23:41:11.277196  125641 out.go:177] * Kubernetes 1.27.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.4
	I0811 23:41:11.278932  125641 driver.go:373] Setting default libvirt URI to qemu:///system
	I0811 23:41:11.318886  125641 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0811 23:41:11.318980  125641 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0811 23:41:11.414756  125641 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17044-2333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4.checksum
	I0811 23:41:11.437586  125641 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:45 SystemTime:2023-08-11 23:41:11.427956009 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0811 23:41:11.437688  125641 docker.go:294] overlay module found
	I0811 23:41:11.441509  125641 out.go:177] * Using the docker driver based on existing profile
	I0811 23:41:11.443150  125641 start.go:298] selected driver: docker
	I0811 23:41:11.443170  125641 start.go:901] validating driver "docker" against &{Name:stopped-upgrade-773979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-773979 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.92 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0811 23:41:11.443277  125641 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0811 23:41:11.443896  125641 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0811 23:41:11.514090  125641 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:45 SystemTime:2023-08-11 23:41:11.504642889 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0811 23:41:11.514389  125641 cni.go:84] Creating CNI manager for ""
	I0811 23:41:11.514407  125641 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0811 23:41:11.514419  125641 start_flags.go:319] config:
	{Name:stopped-upgrade-773979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-773979 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket
: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.92 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0811 23:41:11.516428  125641 out.go:177] * Starting control plane node stopped-upgrade-773979 in cluster stopped-upgrade-773979
	I0811 23:41:11.517902  125641 cache.go:122] Beginning downloading kic base image for docker with crio
	I0811 23:41:11.519359  125641 out.go:177] * Pulling base image ...
	I0811 23:41:11.520742  125641 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I0811 23:41:11.520769  125641 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I0811 23:41:11.538906  125641 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I0811 23:41:11.538928  125641 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W0811 23:41:11.598269  125641 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I0811 23:41:11.598434  125641 profile.go:148] Saving config to /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/stopped-upgrade-773979/config.json ...
	I0811 23:41:11.598512  125641 cache.go:107] acquiring lock: {Name:mk7a5741f7b4e0b160bbe01f7fd094635998893d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 23:41:11.598592  125641 cache.go:115] /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0811 23:41:11.598602  125641 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 96.846µs
	I0811 23:41:11.598611  125641 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0811 23:41:11.598621  125641 cache.go:107] acquiring lock: {Name:mk94b66842cf8433baba636877bda45bc30090e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 23:41:11.598688  125641 cache.go:195] Successfully downloaded all kic artifacts
	I0811 23:41:11.598710  125641 cache.go:115] /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I0811 23:41:11.598724  125641 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 101.227µs
	I0811 23:41:11.598723  125641 start.go:365] acquiring machines lock for stopped-upgrade-773979: {Name:mk3ce53fc1bd70a408409b418b4b018dbe5be21f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 23:41:11.598733  125641 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I0811 23:41:11.598746  125641 cache.go:107] acquiring lock: {Name:mk0c08839063be20f2e0c15aa4b7b2ce91a4b35c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 23:41:11.598759  125641 start.go:369] acquired machines lock for "stopped-upgrade-773979" in 22.318µs
	I0811 23:41:11.598772  125641 start.go:96] Skipping create...Using existing machine configuration
	I0811 23:41:11.598777  125641 fix.go:54] fixHost starting: 
	I0811 23:41:11.598788  125641 cache.go:115] /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I0811 23:41:11.598795  125641 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 51.126µs
	I0811 23:41:11.598802  125641 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I0811 23:41:11.598813  125641 cache.go:107] acquiring lock: {Name:mk7926f0ea7b917ece46b318e6b71004c7867d92 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 23:41:11.598839  125641 cache.go:115] /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I0811 23:41:11.598844  125641 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 35.086µs
	I0811 23:41:11.598850  125641 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I0811 23:41:11.598858  125641 cache.go:107] acquiring lock: {Name:mkaea833af7e0d47e53f03de55b4fcd709cc7efc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 23:41:11.598882  125641 cache.go:115] /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I0811 23:41:11.598886  125641 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 29.267µs
	I0811 23:41:11.598892  125641 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I0811 23:41:11.598900  125641 cache.go:107] acquiring lock: {Name:mk70992b6ec3b9f61b1ba2b19474481f643a50d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 23:41:11.598936  125641 cache.go:115] /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I0811 23:41:11.598940  125641 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 41.469µs
	I0811 23:41:11.598946  125641 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I0811 23:41:11.598955  125641 cache.go:107] acquiring lock: {Name:mk6e8ba24db6814fb692d5c252326ec03b67fcc8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 23:41:11.598979  125641 cache.go:115] /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I0811 23:41:11.598983  125641 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 29.21µs
	I0811 23:41:11.598989  125641 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I0811 23:41:11.598996  125641 cache.go:107] acquiring lock: {Name:mk73f2a48d6979c6af785fa1481e1aebfce32d75 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 23:41:11.599023  125641 cache.go:115] /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I0811 23:41:11.599031  125641 cli_runner.go:164] Run: docker container inspect stopped-upgrade-773979 --format={{.State.Status}}
	I0811 23:41:11.599030  125641 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 31.779µs
	I0811 23:41:11.599037  125641 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17044-2333/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I0811 23:41:11.599042  125641 cache.go:87] Successfully saved all images to host disk.
	I0811 23:41:11.616832  125641 fix.go:102] recreateIfNeeded on stopped-upgrade-773979: state=Stopped err=<nil>
	W0811 23:41:11.616856  125641 fix.go:128] unexpected machine state, will restart: <nil>
	I0811 23:41:11.619006  125641 out.go:177] * Restarting existing docker container for "stopped-upgrade-773979" ...
	I0811 23:41:11.620536  125641 cli_runner.go:164] Run: docker start stopped-upgrade-773979
	I0811 23:41:11.923018  125641 cli_runner.go:164] Run: docker container inspect stopped-upgrade-773979 --format={{.State.Status}}
	I0811 23:41:11.950613  125641 kic.go:426] container "stopped-upgrade-773979" state is running.
	I0811 23:41:11.951005  125641 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-773979
	I0811 23:41:11.974671  125641 profile.go:148] Saving config to /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/stopped-upgrade-773979/config.json ...
	I0811 23:41:11.974908  125641 machine.go:88] provisioning docker machine ...
	I0811 23:41:11.974928  125641 ubuntu.go:169] provisioning hostname "stopped-upgrade-773979"
	I0811 23:41:11.974992  125641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-773979
	I0811 23:41:11.995941  125641 main.go:141] libmachine: Using SSH client type: native
	I0811 23:41:11.996615  125641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f940] 0x3a22d0 <nil>  [] 0s} 127.0.0.1 32954 <nil> <nil>}
	I0811 23:41:11.996633  125641 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-773979 && echo "stopped-upgrade-773979" | sudo tee /etc/hostname
	I0811 23:41:11.997372  125641 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0811 23:41:15.154279  125641 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-773979
	
	I0811 23:41:15.154458  125641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-773979
	I0811 23:41:15.174961  125641 main.go:141] libmachine: Using SSH client type: native
	I0811 23:41:15.175419  125641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f940] 0x3a22d0 <nil>  [] 0s} 127.0.0.1 32954 <nil> <nil>}
	I0811 23:41:15.175438  125641 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-773979' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-773979/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-773979' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0811 23:41:15.318207  125641 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0811 23:41:15.318230  125641 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17044-2333/.minikube CaCertPath:/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17044-2333/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17044-2333/.minikube}
	I0811 23:41:15.318271  125641 ubuntu.go:177] setting up certificates
	I0811 23:41:15.318288  125641 provision.go:83] configureAuth start
	I0811 23:41:15.318348  125641 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-773979
	I0811 23:41:15.337171  125641 provision.go:138] copyHostCerts
	I0811 23:41:15.337232  125641 exec_runner.go:144] found /home/jenkins/minikube-integration/17044-2333/.minikube/ca.pem, removing ...
	I0811 23:41:15.337254  125641 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17044-2333/.minikube/ca.pem
	I0811 23:41:15.337330  125641 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17044-2333/.minikube/ca.pem (1082 bytes)
	I0811 23:41:15.337428  125641 exec_runner.go:144] found /home/jenkins/minikube-integration/17044-2333/.minikube/cert.pem, removing ...
	I0811 23:41:15.337433  125641 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17044-2333/.minikube/cert.pem
	I0811 23:41:15.337458  125641 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17044-2333/.minikube/cert.pem (1123 bytes)
	I0811 23:41:15.337506  125641 exec_runner.go:144] found /home/jenkins/minikube-integration/17044-2333/.minikube/key.pem, removing ...
	I0811 23:41:15.337510  125641 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17044-2333/.minikube/key.pem
	I0811 23:41:15.337532  125641 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17044-2333/.minikube/key.pem (1675 bytes)
	I0811 23:41:15.337579  125641 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17044-2333/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-773979 san=[192.168.59.92 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-773979]
	I0811 23:41:15.728242  125641 provision.go:172] copyRemoteCerts
	I0811 23:41:15.728310  125641 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0811 23:41:15.728364  125641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-773979
	I0811 23:41:15.746774  125641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32954 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/stopped-upgrade-773979/id_rsa Username:docker}
	I0811 23:41:15.846209  125641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0811 23:41:15.869399  125641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0811 23:41:15.892687  125641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0811 23:41:15.916594  125641 provision.go:86] duration metric: configureAuth took 598.278448ms
	I0811 23:41:15.916620  125641 ubuntu.go:193] setting minikube options for container-runtime
	I0811 23:41:15.916837  125641 config.go:182] Loaded profile config "stopped-upgrade-773979": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0811 23:41:15.916952  125641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-773979
	I0811 23:41:15.936473  125641 main.go:141] libmachine: Using SSH client type: native
	I0811 23:41:15.936976  125641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f940] 0x3a22d0 <nil>  [] 0s} 127.0.0.1 32954 <nil> <nil>}
	I0811 23:41:15.936999  125641 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0811 23:41:16.376005  125641 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0811 23:41:16.376030  125641 machine.go:91] provisioned docker machine in 4.401106896s
	I0811 23:41:16.376040  125641 start.go:300] post-start starting for "stopped-upgrade-773979" (driver="docker")
	I0811 23:41:16.376077  125641 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0811 23:41:16.376163  125641 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0811 23:41:16.376217  125641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-773979
	I0811 23:41:16.395895  125641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32954 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/stopped-upgrade-773979/id_rsa Username:docker}
	I0811 23:41:16.494632  125641 ssh_runner.go:195] Run: cat /etc/os-release
	I0811 23:41:16.498793  125641 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0811 23:41:16.498864  125641 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0811 23:41:16.498890  125641 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0811 23:41:16.498905  125641 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0811 23:41:16.498915  125641 filesync.go:126] Scanning /home/jenkins/minikube-integration/17044-2333/.minikube/addons for local assets ...
	I0811 23:41:16.498976  125641 filesync.go:126] Scanning /home/jenkins/minikube-integration/17044-2333/.minikube/files for local assets ...
	I0811 23:41:16.499062  125641 filesync.go:149] local asset: /home/jenkins/minikube-integration/17044-2333/.minikube/files/etc/ssl/certs/76342.pem -> 76342.pem in /etc/ssl/certs
	I0811 23:41:16.499165  125641 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0811 23:41:16.508084  125641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/files/etc/ssl/certs/76342.pem --> /etc/ssl/certs/76342.pem (1708 bytes)
	I0811 23:41:16.530891  125641 start.go:303] post-start completed in 154.811058ms
	I0811 23:41:16.530966  125641 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0811 23:41:16.531005  125641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-773979
	I0811 23:41:16.549192  125641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32954 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/stopped-upgrade-773979/id_rsa Username:docker}
	I0811 23:41:16.647030  125641 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0811 23:41:16.652690  125641 fix.go:56] fixHost completed within 5.053904889s
	I0811 23:41:16.652712  125641 start.go:83] releasing machines lock for "stopped-upgrade-773979", held for 5.053945078s
	I0811 23:41:16.652781  125641 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-773979
	I0811 23:41:16.675529  125641 ssh_runner.go:195] Run: cat /version.json
	I0811 23:41:16.675589  125641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-773979
	I0811 23:41:16.675854  125641 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0811 23:41:16.675922  125641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-773979
	I0811 23:41:16.697417  125641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32954 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/stopped-upgrade-773979/id_rsa Username:docker}
	I0811 23:41:16.699197  125641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32954 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/stopped-upgrade-773979/id_rsa Username:docker}
	W0811 23:41:16.794494  125641 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0811 23:41:16.794580  125641 ssh_runner.go:195] Run: systemctl --version
	I0811 23:41:16.865862  125641 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0811 23:41:16.975649  125641 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0811 23:41:16.981353  125641 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0811 23:41:17.007387  125641 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0811 23:41:17.007498  125641 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0811 23:41:17.039711  125641 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0811 23:41:17.039735  125641 start.go:466] detecting cgroup driver to use...
	I0811 23:41:17.039786  125641 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0811 23:41:17.039861  125641 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0811 23:41:17.067109  125641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0811 23:41:17.078850  125641 docker.go:196] disabling cri-docker service (if available) ...
	I0811 23:41:17.078912  125641 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0811 23:41:17.090650  125641 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0811 23:41:17.102188  125641 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0811 23:41:17.114530  125641 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0811 23:41:17.114614  125641 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0811 23:41:17.219896  125641 docker.go:212] disabling docker service ...
	I0811 23:41:17.219997  125641 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0811 23:41:17.233531  125641 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0811 23:41:17.245751  125641 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0811 23:41:17.347570  125641 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0811 23:41:17.449071  125641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0811 23:41:17.461259  125641 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0811 23:41:17.479755  125641 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0811 23:41:17.479820  125641 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0811 23:41:17.493928  125641 out.go:177] 
	W0811 23:41:17.495747  125641 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0811 23:41:17.495770  125641 out.go:239] * 
	W0811 23:41:17.496742  125641 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0811 23:41:17.498937  125641 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:212: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p stopped-upgrade-773979 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (82.81s)
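Note: the exit status 90 above traces to the pause-image update step in the log: `sed -i ... /etc/crio/crio.conf.d/02-crio.conf` fails with "No such file or directory" because the v0.0.17 kicbase image restored for this v1.17.0 profile predates CRI-O's drop-in config directory. A minimal sketch of a guarded edit follows; it assumes the older image still ships the monolithic /etc/crio/crio.conf with a `pause_image` key (standard CRI-O packaging, not confirmed by this log), and it is a hypothetical workaround, not minikube's actual code:

	# Sketch: fall back to the monolithic CRI-O config when the
	# drop-in file is missing (hypothetical fix under the assumptions above).
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	if [ ! -f "$CONF" ]; then
	    CONF=/etc/crio/crio.conf   # older kicbase images ship only this file
	fi
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$CONF"
	sudo systemctl restart crio    # pick up the new pause image, as the provisioning step above does

The earlier 404 on the v1.20.2 cri-o preload tarball is unrelated to the failure: the run continues from cached images and only aborts at the pause_image rewrite.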

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (52.92s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-634825 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-634825 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (45.919078186s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-634825] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17044
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17044-2333/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17044-2333/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node pause-634825 in cluster pause-634825
	* Pulling base image ...
	* Updating the running docker "pause-634825" container ...
	* Preparing Kubernetes v1.27.4 on CRI-O 1.24.6 ...
	* Configuring CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-634825" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0811 23:43:49.577002  137978 out.go:296] Setting OutFile to fd 1 ...
	I0811 23:43:49.577314  137978 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:43:49.577344  137978 out.go:309] Setting ErrFile to fd 2...
	I0811 23:43:49.577364  137978 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:43:49.577683  137978 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17044-2333/.minikube/bin
	I0811 23:43:49.578097  137978 out.go:303] Setting JSON to false
	I0811 23:43:49.579350  137978 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5178,"bootTime":1691792252,"procs":290,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0811 23:43:49.579445  137978 start.go:138] virtualization:  
	I0811 23:43:49.583210  137978 out.go:177] * [pause-634825] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	I0811 23:43:49.585527  137978 out.go:177]   - MINIKUBE_LOCATION=17044
	I0811 23:43:49.585615  137978 notify.go:220] Checking for updates...
	I0811 23:43:49.588064  137978 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0811 23:43:49.590659  137978 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17044-2333/kubeconfig
	I0811 23:43:49.592700  137978 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17044-2333/.minikube
	I0811 23:43:49.598472  137978 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0811 23:43:49.600413  137978 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0811 23:43:49.603202  137978 config.go:182] Loaded profile config "pause-634825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0811 23:43:49.603825  137978 driver.go:373] Setting default libvirt URI to qemu:///system
	I0811 23:43:49.631457  137978 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0811 23:43:49.631573  137978 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0811 23:43:49.772711  137978 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:55 SystemTime:2023-08-11 23:43:49.760822326 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0811 23:43:49.772812  137978 docker.go:294] overlay module found
	I0811 23:43:49.775053  137978 out.go:177] * Using the docker driver based on existing profile
	I0811 23:43:49.776926  137978 start.go:298] selected driver: docker
	I0811 23:43:49.776949  137978 start.go:901] validating driver "docker" against &{Name:pause-634825 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:pause-634825 Namespace:default APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-c
reds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0811 23:43:49.777078  137978 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0811 23:43:49.777264  137978 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0811 23:43:49.902444  137978 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:55 SystemTime:2023-08-11 23:43:49.890787685 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0811 23:43:49.902826  137978 cni.go:84] Creating CNI manager for ""
	I0811 23:43:49.902843  137978 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0811 23:43:49.902854  137978 start_flags.go:319] config:
	{Name:pause-634825 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:pause-634825 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISo
cket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesna
pshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0811 23:43:49.906484  137978 out.go:177] * Starting control plane node pause-634825 in cluster pause-634825
	I0811 23:43:49.908162  137978 cache.go:122] Beginning downloading kic base image for docker with crio
	I0811 23:43:49.910050  137978 out.go:177] * Pulling base image ...
	I0811 23:43:49.912037  137978 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0811 23:43:49.912093  137978 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17044-2333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-arm64.tar.lz4
	I0811 23:43:49.912108  137978 cache.go:57] Caching tarball of preloaded images
	I0811 23:43:49.912184  137978 preload.go:174] Found /home/jenkins/minikube-integration/17044-2333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0811 23:43:49.912198  137978 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on crio
	I0811 23:43:49.912374  137978 profile.go:148] Saving config to /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/pause-634825/config.json ...
	I0811 23:43:49.912596  137978 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 in local docker daemon
	I0811 23:43:49.946702  137978 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 in local docker daemon, skipping pull
	I0811 23:43:49.946731  137978 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 exists in daemon, skipping load
	I0811 23:43:49.946753  137978 cache.go:195] Successfully downloaded all kic artifacts
	I0811 23:43:49.946801  137978 start.go:365] acquiring machines lock for pause-634825: {Name:mke54f109f0a709a144da1d78f3eaa61973d1b36 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 23:43:49.946877  137978 start.go:369] acquired machines lock for "pause-634825" in 50.527µs
	I0811 23:43:49.946900  137978 start.go:96] Skipping create...Using existing machine configuration
	I0811 23:43:49.946908  137978 fix.go:54] fixHost starting: 
	I0811 23:43:49.947195  137978 cli_runner.go:164] Run: docker container inspect pause-634825 --format={{.State.Status}}
	I0811 23:43:49.984388  137978 fix.go:102] recreateIfNeeded on pause-634825: state=Running err=<nil>
	W0811 23:43:49.984417  137978 fix.go:128] unexpected machine state, will restart: <nil>
	I0811 23:43:49.986929  137978 out.go:177] * Updating the running docker "pause-634825" container ...
	I0811 23:43:49.988544  137978 machine.go:88] provisioning docker machine ...
	I0811 23:43:49.988576  137978 ubuntu.go:169] provisioning hostname "pause-634825"
	I0811 23:43:49.988650  137978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-634825
	I0811 23:43:50.024512  137978 main.go:141] libmachine: Using SSH client type: native
	I0811 23:43:50.024994  137978 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f940] 0x3a22d0 <nil>  [] 0s} 127.0.0.1 32963 <nil> <nil>}
	I0811 23:43:50.025015  137978 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-634825 && echo "pause-634825" | sudo tee /etc/hostname
	I0811 23:43:50.204316  137978 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-634825
	
	I0811 23:43:50.204398  137978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-634825
	I0811 23:43:50.241696  137978 main.go:141] libmachine: Using SSH client type: native
	I0811 23:43:50.242121  137978 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f940] 0x3a22d0 <nil>  [] 0s} 127.0.0.1 32963 <nil> <nil>}
	I0811 23:43:50.242141  137978 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-634825' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-634825/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-634825' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0811 23:43:50.412801  137978 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0811 23:43:50.412870  137978 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17044-2333/.minikube CaCertPath:/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17044-2333/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17044-2333/.minikube}
	I0811 23:43:50.412917  137978 ubuntu.go:177] setting up certificates
	I0811 23:43:50.412957  137978 provision.go:83] configureAuth start
	I0811 23:43:50.413062  137978 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-634825
	I0811 23:43:50.442327  137978 provision.go:138] copyHostCerts
	I0811 23:43:50.442386  137978 exec_runner.go:144] found /home/jenkins/minikube-integration/17044-2333/.minikube/ca.pem, removing ...
	I0811 23:43:50.442406  137978 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17044-2333/.minikube/ca.pem
	I0811 23:43:50.442479  137978 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17044-2333/.minikube/ca.pem (1082 bytes)
	I0811 23:43:50.442577  137978 exec_runner.go:144] found /home/jenkins/minikube-integration/17044-2333/.minikube/cert.pem, removing ...
	I0811 23:43:50.442583  137978 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17044-2333/.minikube/cert.pem
	I0811 23:43:50.442609  137978 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17044-2333/.minikube/cert.pem (1123 bytes)
	I0811 23:43:50.442666  137978 exec_runner.go:144] found /home/jenkins/minikube-integration/17044-2333/.minikube/key.pem, removing ...
	I0811 23:43:50.442670  137978 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17044-2333/.minikube/key.pem
	I0811 23:43:50.442693  137978 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17044-2333/.minikube/key.pem (1675 bytes)
	I0811 23:43:50.443579  137978 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17044-2333/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca-key.pem org=jenkins.pause-634825 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube pause-634825]
	I0811 23:43:51.148736  137978 provision.go:172] copyRemoteCerts
	I0811 23:43:51.148829  137978 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0811 23:43:51.148893  137978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-634825
	I0811 23:43:51.178533  137978 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32963 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/pause-634825/id_rsa Username:docker}
	I0811 23:43:51.298308  137978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0811 23:43:51.338529  137978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0811 23:43:51.380845  137978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0811 23:43:51.411197  137978 provision.go:86] duration metric: configureAuth took 998.209801ms
	I0811 23:43:51.411223  137978 ubuntu.go:193] setting minikube options for container-runtime
	I0811 23:43:51.411508  137978 config.go:182] Loaded profile config "pause-634825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0811 23:43:51.411654  137978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-634825
	I0811 23:43:51.436980  137978 main.go:141] libmachine: Using SSH client type: native
	I0811 23:43:51.437449  137978 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f940] 0x3a22d0 <nil>  [] 0s} 127.0.0.1 32963 <nil> <nil>}
	I0811 23:43:51.437473  137978 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0811 23:43:56.950944  137978 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0811 23:43:56.950968  137978 machine.go:91] provisioned docker machine in 6.962402786s
	I0811 23:43:56.950979  137978 start.go:300] post-start starting for "pause-634825" (driver="docker")
	I0811 23:43:56.950989  137978 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0811 23:43:56.951051  137978 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0811 23:43:56.951105  137978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-634825
	I0811 23:43:56.988184  137978 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32963 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/pause-634825/id_rsa Username:docker}
	I0811 23:43:57.097560  137978 ssh_runner.go:195] Run: cat /etc/os-release
	I0811 23:43:57.103726  137978 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0811 23:43:57.103766  137978 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0811 23:43:57.103779  137978 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0811 23:43:57.103786  137978 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0811 23:43:57.103796  137978 filesync.go:126] Scanning /home/jenkins/minikube-integration/17044-2333/.minikube/addons for local assets ...
	I0811 23:43:57.103855  137978 filesync.go:126] Scanning /home/jenkins/minikube-integration/17044-2333/.minikube/files for local assets ...
	I0811 23:43:57.103940  137978 filesync.go:149] local asset: /home/jenkins/minikube-integration/17044-2333/.minikube/files/etc/ssl/certs/76342.pem -> 76342.pem in /etc/ssl/certs
	I0811 23:43:57.104052  137978 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0811 23:43:57.117440  137978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/files/etc/ssl/certs/76342.pem --> /etc/ssl/certs/76342.pem (1708 bytes)
	I0811 23:43:57.155159  137978 start.go:303] post-start completed in 204.164287ms
	I0811 23:43:57.155242  137978 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0811 23:43:57.155294  137978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-634825
	I0811 23:43:57.183471  137978 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32963 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/pause-634825/id_rsa Username:docker}
	I0811 23:43:57.288068  137978 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
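Annotator's note: the two df probes above read the second line of df's output; with -h the fifth field is the use% of /var, with -BG the fourth field is the free space in GB. An equivalent check in Go (assumes a POSIX sh, awk, and df on the host):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same pipeline as the log: second line of `df -h /var`, fifth field.
	out, err := exec.Command("sh", "-c", `df -h /var | awk 'NR==2{print $5}'`).Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("use% of /var:", strings.TrimSpace(string(out)))
}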
	I0811 23:43:57.295426  137978 fix.go:56] fixHost completed within 7.348502035s
	I0811 23:43:57.295451  137978 start.go:83] releasing machines lock for "pause-634825", held for 7.348562072s
	I0811 23:43:57.295517  137978 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-634825
	I0811 23:43:57.315405  137978 ssh_runner.go:195] Run: cat /version.json
	I0811 23:43:57.315464  137978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-634825
	I0811 23:43:57.315699  137978 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0811 23:43:57.315737  137978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-634825
	I0811 23:43:57.362942  137978 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32963 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/pause-634825/id_rsa Username:docker}
	I0811 23:43:57.373371  137978 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32963 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/pause-634825/id_rsa Username:docker}
	I0811 23:43:57.473719  137978 ssh_runner.go:195] Run: systemctl --version
	I0811 23:43:57.621143  137978 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0811 23:43:57.768219  137978 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0811 23:43:57.773617  137978 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0811 23:43:57.784098  137978 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0811 23:43:57.784197  137978 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0811 23:43:57.794753  137978 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
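Annotator's note: the find ... -exec mv {} {}.mk_disabled commands above neutralize conflicting CNI configs by renaming rather than deleting them, so they can be restored later. A rough Go equivalent using a glob plus rename (requires root; the glob pattern is copied from the log):

package main

import (
	"log"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	matches, err := filepath.Glob("/etc/cni/net.d/*loopback.conf*")
	if err != nil {
		log.Fatal(err)
	}
	for _, m := range matches {
		if strings.HasSuffix(m, ".mk_disabled") {
			continue // already disabled
		}
		if err := os.Rename(m, m+".mk_disabled"); err != nil {
			log.Fatal(err)
		}
		log.Printf("disabled CNI config: %s", m)
	}
}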
	I0811 23:43:57.794777  137978 start.go:466] detecting cgroup driver to use...
	I0811 23:43:57.794834  137978 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0811 23:43:57.794896  137978 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0811 23:43:57.808935  137978 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0811 23:43:57.822796  137978 docker.go:196] disabling cri-docker service (if available) ...
	I0811 23:43:57.822863  137978 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0811 23:43:57.838337  137978 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0811 23:43:57.852425  137978 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0811 23:43:57.990740  137978 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0811 23:43:58.120172  137978 docker.go:212] disabling docker service ...
	I0811 23:43:58.120238  137978 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0811 23:43:58.135069  137978 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0811 23:43:58.149954  137978 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0811 23:43:58.272733  137978 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0811 23:43:58.394487  137978 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0811 23:43:58.418950  137978 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0811 23:43:58.466592  137978 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0811 23:43:58.466662  137978 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0811 23:43:58.482533  137978 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0811 23:43:58.482598  137978 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0811 23:43:58.532665  137978 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0811 23:43:58.634761  137978 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0811 23:43:58.680414  137978 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0811 23:43:58.710047  137978 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0811 23:43:58.762059  137978 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0811 23:43:58.795998  137978 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0811 23:43:59.088619  137978 ssh_runner.go:195] Run: sudo systemctl restart crio
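Annotator's note: the sed one-liners above replace whole pause_image = ... and cgroup_manager = ... lines in /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. The same line-oriented substitution in Go, applied to an in-memory sample (the sample config text is ours):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.8"

[crio.runtime]
cgroup_manager = "systemd"
`
	// (?m) makes ^ and $ match per line, like sed's s|^...$|...|.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}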
	I0811 23:43:59.470401  137978 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0811 23:43:59.470475  137978 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0811 23:43:59.476211  137978 start.go:534] Will wait 60s for crictl version
	I0811 23:43:59.476274  137978 ssh_runner.go:195] Run: which crictl
	I0811 23:43:59.481375  137978 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0811 23:43:59.531358  137978 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0811 23:43:59.531443  137978 ssh_runner.go:195] Run: crio --version
	I0811 23:43:59.607420  137978 ssh_runner.go:195] Run: crio --version
	I0811 23:43:59.677555  137978 out.go:177] * Preparing Kubernetes v1.27.4 on CRI-O 1.24.6 ...
	I0811 23:43:59.679185  137978 cli_runner.go:164] Run: docker network inspect pause-634825 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0811 23:43:59.707274  137978 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0811 23:43:59.713071  137978 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0811 23:43:59.713154  137978 ssh_runner.go:195] Run: sudo crictl images --output json
	I0811 23:43:59.767937  137978 crio.go:496] all images are preloaded for cri-o runtime.
	I0811 23:43:59.767962  137978 crio.go:415] Images already preloaded, skipping extraction
	I0811 23:43:59.768018  137978 ssh_runner.go:195] Run: sudo crictl images --output json
	I0811 23:43:59.809179  137978 crio.go:496] all images are preloaded for cri-o runtime.
	I0811 23:43:59.809199  137978 cache_images.go:84] Images are preloaded, skipping loading
	I0811 23:43:59.809296  137978 ssh_runner.go:195] Run: crio config
	I0811 23:43:59.864230  137978 cni.go:84] Creating CNI manager for ""
	I0811 23:43:59.864255  137978 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0811 23:43:59.864295  137978 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0811 23:43:59.864320  137978 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-634825 NodeName:pause-634825 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0811 23:43:59.864520  137978 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-634825"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
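Annotator's note: the kubeadm config above is rendered from the options struct logged at kubeadm.go:176. A toy text/template sketch of that rendering for just the InitConfiguration fragment (the struct and field names are ours, not minikube's):

package main

import (
	"log"
	"os"
	"text/template"
)

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	opts := struct {
		AdvertiseAddress string
		APIServerPort    int
		CRISocket        string
		NodeName         string
	}{"192.168.76.2", 8443, "unix:///var/run/crio/crio.sock", "pause-634825"}
	t := template.Must(template.New("kubeadm").Parse(initCfg))
	if err := t.Execute(os.Stdout, opts); err != nil {
		log.Fatal(err)
	}
}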
	
	I0811 23:43:59.864608  137978 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=pause-634825 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:pause-634825 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0811 23:43:59.864705  137978 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0811 23:43:59.875572  137978 binaries.go:44] Found k8s binaries, skipping transfer
	I0811 23:43:59.875678  137978 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0811 23:43:59.886315  137978 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (422 bytes)
	I0811 23:43:59.907429  137978 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0811 23:43:59.928888  137978 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2093 bytes)
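Annotator's note: the "scp memory --> <path> (N bytes)" lines mean the payload was rendered in memory and written straight to the destination over SSH, with no temp file on the source side. A local stand-in (the real code copies to the remote machine; this sketch just writes to disk and logs the byte count):

package main

import (
	"log"
	"os"
)

func main() {
	payload := []byte("rendered kubeadm.yaml contents go here\n") // illustrative
	dst := "/tmp/kubeadm.yaml.new"                                // illustrative path
	if err := os.WriteFile(dst, payload, 0o644); err != nil {
		log.Fatal(err)
	}
	log.Printf("scp memory --> %s (%d bytes)", dst, len(payload))
}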
	I0811 23:43:59.950502  137978 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0811 23:43:59.955039  137978 certs.go:56] Setting up /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/pause-634825 for IP: 192.168.76.2
	I0811 23:43:59.955077  137978 certs.go:190] acquiring lock for shared ca certs: {Name:mk92ef0e52f7a4bf6e55e35fe7431dc846a67439 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:43:59.955222  137978 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17044-2333/.minikube/ca.key
	I0811 23:43:59.955268  137978 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17044-2333/.minikube/proxy-client-ca.key
	I0811 23:43:59.955346  137978 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/pause-634825/client.key
	I0811 23:43:59.955415  137978 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/pause-634825/apiserver.key.31bdca25
	I0811 23:43:59.955466  137978 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/pause-634825/proxy-client.key
	I0811 23:43:59.955589  137978 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/home/jenkins/minikube-integration/17044-2333/.minikube/certs/7634.pem (1338 bytes)
	W0811 23:43:59.955623  137978 certs.go:433] ignoring /home/jenkins/minikube-integration/17044-2333/.minikube/certs/home/jenkins/minikube-integration/17044-2333/.minikube/certs/7634_empty.pem, impossibly tiny 0 bytes
	I0811 23:43:59.955635  137978 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca-key.pem (1675 bytes)
	I0811 23:43:59.955668  137978 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem (1082 bytes)
	I0811 23:43:59.955698  137978 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/home/jenkins/minikube-integration/17044-2333/.minikube/certs/cert.pem (1123 bytes)
	I0811 23:43:59.955726  137978 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/home/jenkins/minikube-integration/17044-2333/.minikube/certs/key.pem (1675 bytes)
	I0811 23:43:59.955774  137978 certs.go:437] found cert: /home/jenkins/minikube-integration/17044-2333/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17044-2333/.minikube/files/etc/ssl/certs/76342.pem (1708 bytes)
	I0811 23:43:59.956404  137978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/pause-634825/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0811 23:43:59.985290  137978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/pause-634825/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0811 23:44:00.026106  137978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/pause-634825/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0811 23:44:00.157745  137978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/pause-634825/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0811 23:44:00.231461  137978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0811 23:44:00.305163  137978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0811 23:44:00.339358  137978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0811 23:44:00.373265  137978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0811 23:44:00.405480  137978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/files/etc/ssl/certs/76342.pem --> /usr/share/ca-certificates/76342.pem (1708 bytes)
	I0811 23:44:00.437406  137978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0811 23:44:00.467584  137978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/certs/7634.pem --> /usr/share/ca-certificates/7634.pem (1338 bytes)
	I0811 23:44:00.496542  137978 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0811 23:44:00.517905  137978 ssh_runner.go:195] Run: openssl version
	I0811 23:44:00.525214  137978 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0811 23:44:00.537724  137978 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0811 23:44:00.546435  137978 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 11 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I0811 23:44:00.546551  137978 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0811 23:44:00.555903  137978 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0811 23:44:00.566835  137978 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7634.pem && ln -fs /usr/share/ca-certificates/7634.pem /etc/ssl/certs/7634.pem"
	I0811 23:44:00.578493  137978 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7634.pem
	I0811 23:44:00.583285  137978 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 11 23:09 /usr/share/ca-certificates/7634.pem
	I0811 23:44:00.583397  137978 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7634.pem
	I0811 23:44:00.591833  137978 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7634.pem /etc/ssl/certs/51391683.0"
	I0811 23:44:00.603237  137978 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/76342.pem && ln -fs /usr/share/ca-certificates/76342.pem /etc/ssl/certs/76342.pem"
	I0811 23:44:00.615756  137978 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/76342.pem
	I0811 23:44:00.620481  137978 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 11 23:09 /usr/share/ca-certificates/76342.pem
	I0811 23:44:00.620582  137978 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/76342.pem
	I0811 23:44:00.628997  137978 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/76342.pem /etc/ssl/certs/3ec20f2e.0"
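Annotator's note: each cert install above follows the same three-step dance: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink /etc/ssl/certs/<hash>.0 to it so OpenSSL's lookup-by-hash finds the CA (b5213941.0 is minikubeCA's hash in this run). A sketch of the hash-and-link step (shells out to openssl; needs write access to /etc/ssl/certs):

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if err := os.Symlink(pem, link); err != nil && !os.IsExist(err) {
		log.Fatal(err)
	}
	log.Printf("%s -> %s", link, pem)
}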
	I0811 23:44:00.639840  137978 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0811 23:44:00.644249  137978 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0811 23:44:00.652447  137978 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0811 23:44:00.660946  137978 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0811 23:44:00.669443  137978 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0811 23:44:00.678270  137978 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0811 23:44:00.686616  137978 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
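Annotator's note: openssl x509 -checkend 86400 exits nonzero if the cert expires within the next 24 hours; the run above uses it to decide whether any control-plane certs need regeneration. The same check in pure Go via crypto/x509:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-etcd-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Mirror `-checkend 86400`: will the cert still be valid in 86400s?
	if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
		fmt.Println("Certificate will expire")
		os.Exit(1)
	}
	fmt.Println("Certificate will not expire")
}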
	I0811 23:44:00.694928  137978 kubeadm.go:404] StartCluster: {Name:pause-634825 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:pause-634825 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServ
erIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-p
rovisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0811 23:44:00.695040  137978 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0811 23:44:00.695108  137978 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0811 23:44:00.735665  137978 cri.go:89] found id: "d01488efe142281153b08384869059e10873d2fd7f4f17d722b27b398964923e"
	I0811 23:44:00.735735  137978 cri.go:89] found id: "d529e1a8dc40a6ab1f2b51e1d86c5819a7cae02e60dcc86a018f72ddb94523f1"
	I0811 23:44:00.735754  137978 cri.go:89] found id: "c65c1e64ca450ac7b48648607efcd219484ec0064b2c988f1289ce1b78b07db8"
	I0811 23:44:00.735768  137978 cri.go:89] found id: "1f79bbe2d8cb2b2e137e912e515a29f46e821351a0ac134c781281b119acc66c"
	I0811 23:44:00.735773  137978 cri.go:89] found id: "24d42e36b289e4acc22437a5331118aff3e510567fbaa83f6407823d129ad5d7"
	I0811 23:44:00.735778  137978 cri.go:89] found id: "6024f9aef0a281043827e7556097bdddf70695ffbdd8e11a3a5f3ca6baca26f9"
	I0811 23:44:00.735782  137978 cri.go:89] found id: "9ecaf1f2abdda582ed04056f0de0846a97e5803f559b05701a13e57372cf544a"
	I0811 23:44:00.735786  137978 cri.go:89] found id: ""
	I0811 23:44:00.735862  137978 ssh_runner.go:195] Run: sudo runc list -f json
	I0811 23:44:00.763259  137978 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"1f79bbe2d8cb2b2e137e912e515a29f46e821351a0ac134c781281b119acc66c","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/1f79bbe2d8cb2b2e137e912e515a29f46e821351a0ac134c781281b119acc66c/userdata","rootfs":"/var/lib/containers/storage/overlay/0501c50ae79c9085f82bf070866a80cdeb486a8ac89b20189e7e2f1f737b15e7/merged","created":"2023-08-11T23:43:58.798651583Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"89aca820","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"89aca820\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMes
sagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"1f79bbe2d8cb2b2e137e912e515a29f46e821351a0ac134c781281b119acc66c","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-08-11T23:43:58.493561785Z","io.kubernetes.cri-o.Image":"532e5a30e948f1c084333316b13e68fbeff8df667f3830b082005127a6d86317","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-proxy:v1.27.4","io.kubernetes.cri-o.ImageRef":"532e5a30e948f1c084333316b13e68fbeff8df667f3830b082005127a6d86317","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-sptbv\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"f9473cb2-7b87-40e8-891a-aa651f27406d\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-sptbv_f9473cb2-7b87-40e8-891a-aa651f27406d/kube-proxy/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\",\"attempt\":1}","io.kubernetes.cri-o.MountPo
int":"/var/lib/containers/storage/overlay/0501c50ae79c9085f82bf070866a80cdeb486a8ac89b20189e7e2f1f737b15e7/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-sptbv_kube-system_f9473cb2-7b87-40e8-891a-aa651f27406d_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ef63d5ca5c5deeff0492472fbca6f69ccde92debf681996838d5fda80989c9d7/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"ef63d5ca5c5deeff0492472fbca6f69ccde92debf681996838d5fda80989c9d7","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-sptbv_kube-system_f9473cb2-7b87-40e8-891a-aa651f27406d_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagatio
n\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/f9473cb2-7b87-40e8-891a-aa651f27406d/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/f9473cb2-7b87-40e8-891a-aa651f27406d/containers/kube-proxy/b1902c83\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/f9473cb2-7b87-40e8-891a-aa651f27406d/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/f9473cb2-7b87-40e8-891a-aa651f27406d/volumes/kubernetes.io~projected/kube-api-access-5khfj\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-sptbv","io.kubernetes.pod.namespace":"kube-system","io.kub
ernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"f9473cb2-7b87-40e8-891a-aa651f27406d","kubernetes.io/config.seen":"2023-08-11T23:43:15.100778975Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"24d42e36b289e4acc22437a5331118aff3e510567fbaa83f6407823d129ad5d7","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/24d42e36b289e4acc22437a5331118aff3e510567fbaa83f6407823d129ad5d7/userdata","rootfs":"/var/lib/containers/storage/overlay/a890e537fdc9bc39e591d626566fdc36822202ab8c89f1a0ace70f59ca6e515a/merged","created":"2023-08-11T23:43:58.677813212Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"8fd9444b","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"8fd9444b\",
\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"24d42e36b289e4acc22437a5331118aff3e510567fbaa83f6407823d129ad5d7","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-08-11T23:43:58.485661475Z","io.kubernetes.cri-o.Image":"64aece92d6bde5b472d8185fcd2d5ab1add8814923a26561821f7cab5e819388","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.27.4","io.kubernetes.cri-o.ImageRef":"64aece92d6bde5b472d8185fcd2d5ab1add8814923a26561821f7cab5e819388","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-634825\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"4d31603afc8fe54361d4f33387c8c085\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-sy
stem_kube-apiserver-pause-634825_4d31603afc8fe54361d4f33387c8c085/kube-apiserver/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/a890e537fdc9bc39e591d626566fdc36822202ab8c89f1a0ace70f59ca6e515a/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-634825_kube-system_4d31603afc8fe54361d4f33387c8c085_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/1939f6bc8aca76c1e4af615070946fbd836864e2e4784b548dce01311412b5af/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"1939f6bc8aca76c1e4af615070946fbd836864e2e4784b548dce01311412b5af","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-634825_kube-system_4d31603afc8fe54361d4f33387c8c085_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_p
ath\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/4d31603afc8fe54361d4f33387c8c085/containers/kube-apiserver/e48e0b8b\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/4d31603afc8fe54361d4f33387c8c085/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-cer
tificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-pause-634825","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"4d31603afc8fe54361d4f33387c8c085","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.76.2:8443","kubernetes.io/config.hash":"4d31603afc8fe54361d4f33387c8c085","kubernetes.io/config.seen":"2023-08-11T23:42:52.231039480Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6024f9aef0a281043827e7556097bdddf70695ffbdd8e11a3a5f3ca6baca26f9","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/6024f9aef0a281043827e7556097bdddf70695ffbdd8e11a3a5f3ca6baca26f9/userdata","rootfs":"/var/lib/containers/storage/overlay/0b6fea8aec3a9e39ca315f9397a1212f72f6969a6e0d0d86775cb2f53d0b13f8/merged","created":"2023-08-11T23:43:58.603399288Z","annotation
s":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"3fe912dd","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"3fe912dd\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"6024f9aef0a281043827e7556097bdddf70695ffbdd8e11a3a5f3ca6baca26f9","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-08-11T23:43:58.453036692Z","io.kubernetes.cri-o.Image":"24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.7-0","io.kubernetes.cri-o.ImageRef":"24bc64e911039
ecf00e263be2161797c758b7d82403ca5516ab64047a477f737","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-pause-634825\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"94bb06b62e5d03697e8916b3ea923f95\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-634825_94bb06b62e5d03697e8916b3ea923f95/etcd/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/0b6fea8aec3a9e39ca315f9397a1212f72f6969a6e0d0d86775cb2f53d0b13f8/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-634825_kube-system_94bb06b62e5d03697e8916b3ea923f95_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/67766735af340b847e8e0fee1da6270726e0b0d1f8b20b47dae15bb4696258a3/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"67766735af340b847e8e0fee1da6270726e0b0d1f8b20b47dae15bb4696258a3","io.kubernetes.cri-o.SandboxName":"k8s_e
tcd-pause-634825_kube-system_94bb06b62e5d03697e8916b3ea923f95_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/94bb06b62e5d03697e8916b3ea923f95/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/94bb06b62e5d03697e8916b3ea923f95/containers/etcd/47432f3c\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-pause-634825","io.ku
bernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"94bb06b62e5d03697e8916b3ea923f95","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.76.2:2379","kubernetes.io/config.hash":"94bb06b62e5d03697e8916b3ea923f95","kubernetes.io/config.seen":"2023-08-11T23:42:52.231034130Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9ecaf1f2abdda582ed04056f0de0846a97e5803f559b05701a13e57372cf544a","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/9ecaf1f2abdda582ed04056f0de0846a97e5803f559b05701a13e57372cf544a/userdata","rootfs":"/var/lib/containers/storage/overlay/1febffb76c5600f2b310f420dd1bd8b73f8f7edd57e841d82323bcc6a6e25bda/merged","created":"2023-08-11T23:43:58.697567894Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"373e41ff","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"1","io.kubernetes.container.ter
minationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"373e41ff\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"9ecaf1f2abdda582ed04056f0de0846a97e5803f559b05701a13e57372cf544a","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-08-11T23:43:58.432842516Z","io.kubernetes.cri-o.Image":"6eb63895cb67fce76da3ed6eaaa865ff55e7c761c9e6a691a83855ff0987a085","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.27.4","io.kubernetes.cri-o.ImageRef":"6eb63895cb67fce76da3ed6eaaa865ff55e7c761c9e6a691a83855ff0987a085","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-schedu
ler-pause-634825\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"1cafe5bb6332608ac4a5fcb2ea6c499f\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-634825_1cafe5bb6332608ac4a5fcb2ea6c499f/kube-scheduler/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/1febffb76c5600f2b310f420dd1bd8b73f8f7edd57e841d82323bcc6a6e25bda/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-634825_kube-system_1cafe5bb6332608ac4a5fcb2ea6c499f_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/b8ff3a6415a63ddaf8355be3898e23e08fc25146775c24f362d47fea0a5f1a9d/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"b8ff3a6415a63ddaf8355be3898e23e08fc25146775c24f362d47fea0a5f1a9d","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-634825_kube-system_1cafe5bb6332608ac4a5fcb2ea6c499f_0","io.kubernetes.cri-o.SeccompPro
filePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/1cafe5bb6332608ac4a5fcb2ea6c499f/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/1cafe5bb6332608ac4a5fcb2ea6c499f/containers/kube-scheduler/899871c7\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-pause-634825","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"1cafe5bb6332608ac4a5fcb2ea6c499f","kubernetes.io/config.hash":"1cafe5bb6332608ac4a5fcb2ea6c499f","kubernetes.io/co
nfig.seen":"2023-08-11T23:42:52.231041958Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c65c1e64ca450ac7b48648607efcd219484ec0064b2c988f1289ce1b78b07db8","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/c65c1e64ca450ac7b48648607efcd219484ec0064b2c988f1289ce1b78b07db8/userdata","rootfs":"/var/lib/containers/storage/overlay/baa160bd83925e3da6ec37eaf9147c5517809246a81395e13d1aba6d6c0bc3e4/merged","created":"2023-08-11T23:43:58.721040641Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"57a69d01","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"57a69d01\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",
\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"c65c1e64ca450ac7b48648607efcd219484ec0064b2c988f1289ce1b78b07db8","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-08-11T23:43:58.525899361Z","io.kubernetes.cri-o.Image":"b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20230511-dc714da8","io.kubernetes.cri-o.ImageRef":"b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-q6qpq\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"cf34f80e-7018-4003-b7c5-94c7c8ea41da\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-q6qpq_cf34f80e-7018-4003-b7c5-94c7c8ea41da/kindnet-cni/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cn
i\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/baa160bd83925e3da6ec37eaf9147c5517809246a81395e13d1aba6d6c0bc3e4/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-q6qpq_kube-system_cf34f80e-7018-4003-b7c5-94c7c8ea41da_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/7e7075607dcbe29b71e83494f2d6e7bf7aeb9ae7400c5d010aa551dac55bca68/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"7e7075607dcbe29b71e83494f2d6e7bf7aeb9ae7400c5d010aa551dac55bca68","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-q6qpq_kube-system_cf34f80e-7018-4003-b7c5-94c7c8ea41da_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/l
ib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/cf34f80e-7018-4003-b7c5-94c7c8ea41da/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/cf34f80e-7018-4003-b7c5-94c7c8ea41da/containers/kindnet-cni/f4754316\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/cf34f80e-7018-4003-b7c5-94c7c8ea41da/volumes/kubernetes.io~projected/kube-api-access-pqb66\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kindnet-q6qpq","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kuber
netes.pod.uid":"cf34f80e-7018-4003-b7c5-94c7c8ea41da","kubernetes.io/config.seen":"2023-08-11T23:43:15.098069526Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d01488efe142281153b08384869059e10873d2fd7f4f17d722b27b398964923e","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/d01488efe142281153b08384869059e10873d2fd7f4f17d722b27b398964923e/userdata","rootfs":"/var/lib/containers/storage/overlay/5a9180cdc1948b4f86a089b177ee3c9586e0b29c6f4d16814db781e693b667d9/merged","created":"2023-08-11T23:43:58.73824338Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"aa1b7757","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"aa1b7757\",\"io.kubernetes.container.restartCount\":\
"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"d01488efe142281153b08384869059e10873d2fd7f4f17d722b27b398964923e","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-08-11T23:43:58.57501702Z","io.kubernetes.cri-o.Image":"389f6f052cf83156f82a2bbbf6ea2c24292d246b58900d91f6a1707eacf510b2","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.27.4","io.kubernetes.cri-o.ImageRef":"389f6f052cf83156f82a2bbbf6ea2c24292d246b58900d91f6a1707eacf510b2","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-634825\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"8650f5470ccad9750561aebc3422b023\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-contro
ller-manager-pause-634825_8650f5470ccad9750561aebc3422b023/kube-controller-manager/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/5a9180cdc1948b4f86a089b177ee3c9586e0b29c6f4d16814db781e693b667d9/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-634825_kube-system_8650f5470ccad9750561aebc3422b023_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/2027f21f45cb4916556f6d7686a845dda0aa9530d573d870beda9b7ee4bafe20/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"2027f21f45cb4916556f6d7686a845dda0aa9530d573d870beda9b7ee4bafe20","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-634825_kube-system_8650f5470ccad9750561aebc3422b023_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kube
rnetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/8650f5470ccad9750561aebc3422b023/containers/kube-controller-manager/02ca4312\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/8650f5470ccad9750561aebc3422b023/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagati
on\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-634825","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"8650f5470ccad9750561aebc3422b023","kubernetes.io/config.hash":"8650f5470ccad9750561aebc3422b023","kubernetes.io/config.seen":"2023-08-11T23:42:52.231040940Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d529e1a8dc40a6ab1f2b51e1d86c5
819a7cae02e60dcc86a018f72ddb94523f1","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/d529e1a8dc40a6ab1f2b51e1d86c5819a7cae02e60dcc86a018f72ddb94523f1/userdata","rootfs":"/var/lib/containers/storage/overlay/132782d0dd6e0fc0117e193ddf7f4d328127578564f046be0a87996ec7701b4e/merged","created":"2023-08-11T23:43:58.727591978Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"1406d2b4","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"1406d2b4\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dn
s\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"d529e1a8dc40a6ab1f2b51e1d86c5819a7cae02e60dcc86a018f72ddb94523f1","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-08-11T23:43:58.536013775Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","io.kubernetes.cri-o.ImageName":"registry.k8s.io/coredns/coredns:v1.10.1","io.kubernetes.cri-o.ImageRef":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","io.kubernetes.cri-o.Labels":"
{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-5d78c9869d-7zz5s\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"e3005957-3506-4ea1-a12a-1961a28c67d4\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-5d78c9869d-7zz5s_e3005957-3506-4ea1-a12a-1961a28c67d4/coredns/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/132782d0dd6e0fc0117e193ddf7f4d328127578564f046be0a87996ec7701b4e/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-5d78c9869d-7zz5s_kube-system_e3005957-3506-4ea1-a12a-1961a28c67d4_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/a48e172e7e9f0377f2066cba42f691ab5a47b2a04a9c0ae2fab01ccd31ddbf5b/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"a48e172e7e9f0377f2066cba42f691ab5a47b2a04a9c0ae2fab01ccd31ddbf5b","io.kubernetes.cri-o.SandboxName":"k8s_coredns-5d78c9869d-7zz5s_kube-system_e3
005957-3506-4ea1-a12a-1961a28c67d4_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/e3005957-3506-4ea1-a12a-1961a28c67d4/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/e3005957-3506-4ea1-a12a-1961a28c67d4/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/e3005957-3506-4ea1-a12a-1961a28c67d4/containers/coredns/dd5b4670\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/e3005957-3506-4ea1-a12a-1961a28c67d4/volumes/kubernetes.io~projected/
kube-api-access-89glp\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-5d78c9869d-7zz5s","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"e3005957-3506-4ea1-a12a-1961a28c67d4","kubernetes.io/config.seen":"2023-08-11T23:43:46.026121822Z","kubernetes.io/config.source":"api"},"owner":"root"}]
	I0811 23:44:00.763836  137978 cri.go:126] list returned 7 containers
	I0811 23:44:00.763857  137978 cri.go:129] container: {ID:1f79bbe2d8cb2b2e137e912e515a29f46e821351a0ac134c781281b119acc66c Status:stopped}
	I0811 23:44:00.763872  137978 cri.go:135] skipping {1f79bbe2d8cb2b2e137e912e515a29f46e821351a0ac134c781281b119acc66c stopped}: state = "stopped", want "paused"
	I0811 23:44:00.763885  137978 cri.go:129] container: {ID:24d42e36b289e4acc22437a5331118aff3e510567fbaa83f6407823d129ad5d7 Status:stopped}
	I0811 23:44:00.763895  137978 cri.go:135] skipping {24d42e36b289e4acc22437a5331118aff3e510567fbaa83f6407823d129ad5d7 stopped}: state = "stopped", want "paused"
	I0811 23:44:00.763902  137978 cri.go:129] container: {ID:6024f9aef0a281043827e7556097bdddf70695ffbdd8e11a3a5f3ca6baca26f9 Status:stopped}
	I0811 23:44:00.763913  137978 cri.go:135] skipping {6024f9aef0a281043827e7556097bdddf70695ffbdd8e11a3a5f3ca6baca26f9 stopped}: state = "stopped", want "paused"
	I0811 23:44:00.763919  137978 cri.go:129] container: {ID:9ecaf1f2abdda582ed04056f0de0846a97e5803f559b05701a13e57372cf544a Status:stopped}
	I0811 23:44:00.763928  137978 cri.go:135] skipping {9ecaf1f2abdda582ed04056f0de0846a97e5803f559b05701a13e57372cf544a stopped}: state = "stopped", want "paused"
	I0811 23:44:00.763942  137978 cri.go:129] container: {ID:c65c1e64ca450ac7b48648607efcd219484ec0064b2c988f1289ce1b78b07db8 Status:stopped}
	I0811 23:44:00.763950  137978 cri.go:135] skipping {c65c1e64ca450ac7b48648607efcd219484ec0064b2c988f1289ce1b78b07db8 stopped}: state = "stopped", want "paused"
	I0811 23:44:00.763958  137978 cri.go:129] container: {ID:d01488efe142281153b08384869059e10873d2fd7f4f17d722b27b398964923e Status:stopped}
	I0811 23:44:00.763968  137978 cri.go:135] skipping {d01488efe142281153b08384869059e10873d2fd7f4f17d722b27b398964923e stopped}: state = "stopped", want "paused"
	I0811 23:44:00.763976  137978 cri.go:129] container: {ID:d529e1a8dc40a6ab1f2b51e1d86c5819a7cae02e60dcc86a018f72ddb94523f1 Status:stopped}
	I0811 23:44:00.763986  137978 cri.go:135] skipping {d529e1a8dc40a6ab1f2b51e1d86c5819a7cae02e60dcc86a018f72ddb94523f1 stopped}: state = "stopped", want "paused"
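The listing-and-skip pass above (cri.go enumerating kube-system containers and skipping everything whose state is not "paused") can be reproduced by hand against the same runtime. A minimal sketch, assuming crictl and jq are available inside the minikube node (e.g. via `minikube ssh`) and that `crictl ps -o json` returns its usual `containers` array:

    # print each kube-system container ID with its CRI state,
    # mirroring the cri.go listing above
    sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system -o json \
      | jq -r '.containers[] | "\(.id) \(.state)"'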
	I0811 23:44:00.764038  137978 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0811 23:44:00.774957  137978 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0811 23:44:00.774980  137978 kubeadm.go:636] restartCluster start
	I0811 23:44:00.775036  137978 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0811 23:44:00.785001  137978 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:44:00.785746  137978 kubeconfig.go:92] found "pause-634825" server: "https://192.168.76.2:8443"
	I0811 23:44:00.786726  137978 kapi.go:59] client config for pause-634825: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17044-2333/.minikube/profiles/pause-634825/client.crt", KeyFile:"/home/jenkins/minikube-integration/17044-2333/.minikube/profiles/pause-634825/client.key", CAFile:"/home/jenkins/minikube-integration/17044-2333/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16eb290), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0811 23:44:00.787621  137978 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0811 23:44:00.797949  137978 api_server.go:166] Checking apiserver status ...
	I0811 23:44:00.798053  137978 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:44:00.809620  137978 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:44:00.809642  137978 api_server.go:166] Checking apiserver status ...
	I0811 23:44:00.809688  137978 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:44:00.821732  137978 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:44:01.322388  137978 api_server.go:166] Checking apiserver status ...
	I0811 23:44:01.322490  137978 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:44:01.335409  137978 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:44:01.821855  137978 api_server.go:166] Checking apiserver status ...
	I0811 23:44:01.821961  137978 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:44:01.834043  137978 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:44:02.322195  137978 api_server.go:166] Checking apiserver status ...
	I0811 23:44:02.322317  137978 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:44:02.334406  137978 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:44:02.821881  137978 api_server.go:166] Checking apiserver status ...
	I0811 23:44:02.821992  137978 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:44:02.834072  137978 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:44:03.322260  137978 api_server.go:166] Checking apiserver status ...
	I0811 23:44:03.322340  137978 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:44:03.334680  137978 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:44:03.821917  137978 api_server.go:166] Checking apiserver status ...
	I0811 23:44:03.822040  137978 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:44:03.834397  137978 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:44:04.321927  137978 api_server.go:166] Checking apiserver status ...
	I0811 23:44:04.322057  137978 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:44:04.334724  137978 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:44:04.822453  137978 api_server.go:166] Checking apiserver status ...
	I0811 23:44:04.822575  137978 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:44:04.834753  137978 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:44:05.322508  137978 api_server.go:166] Checking apiserver status ...
	I0811 23:44:05.322607  137978 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:44:05.335189  137978 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:44:05.822556  137978 api_server.go:166] Checking apiserver status ...
	I0811 23:44:05.822677  137978 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:44:05.834768  137978 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:44:06.322442  137978 api_server.go:166] Checking apiserver status ...
	I0811 23:44:06.322542  137978 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:44:06.335266  137978 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:44:06.821852  137978 api_server.go:166] Checking apiserver status ...
	I0811 23:44:06.821933  137978 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:44:06.837272  137978 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:44:07.321852  137978 api_server.go:166] Checking apiserver status ...
	I0811 23:44:07.321946  137978 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:44:07.335129  137978 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:44:07.822718  137978 api_server.go:166] Checking apiserver status ...
	I0811 23:44:07.822792  137978 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:44:07.835688  137978 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:44:08.321880  137978 api_server.go:166] Checking apiserver status ...
	I0811 23:44:08.322000  137978 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:44:08.336548  137978 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:44:08.821897  137978 api_server.go:166] Checking apiserver status ...
	I0811 23:44:08.821992  137978 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:44:08.835838  137978 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:44:09.322460  137978 api_server.go:166] Checking apiserver status ...
	I0811 23:44:09.322550  137978 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:44:09.335905  137978 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:44:09.821868  137978 api_server.go:166] Checking apiserver status ...
	I0811 23:44:09.821954  137978 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:44:09.837548  137978 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:44:10.321885  137978 api_server.go:166] Checking apiserver status ...
	I0811 23:44:10.322004  137978 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0811 23:44:10.348145  137978 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
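The ten seconds of identical probes above are minikube polling for an apiserver process that no longer exists. A rough hand-written equivalent of that poll, using the exact pattern and flags from the log (-x exact match, -n newest process, -f match against the full command line); the iteration count and sleep are assumptions, not minikube's values:

    # poll until an apiserver process appears, up to ~10s
    for _ in $(seq 1 20); do
      sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
      sleep 0.5
    done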
	I0811 23:44:10.798615  137978 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0811 23:44:10.798649  137978 kubeadm.go:1128] stopping kube-system containers ...
	I0811 23:44:10.798662  137978 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0811 23:44:10.798733  137978 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0811 23:44:10.868350  137978 cri.go:89] found id: "4425001a8d2b454a91b170a878058197e5f7abe38fa3cdd8e7619a58114a69dd"
	I0811 23:44:10.868369  137978 cri.go:89] found id: "792bd634ed3940953f2c22c64a76c28dabb0b8c94e837fef81ac4a01d8942c96"
	I0811 23:44:10.868374  137978 cri.go:89] found id: "d01488efe142281153b08384869059e10873d2fd7f4f17d722b27b398964923e"
	I0811 23:44:10.868379  137978 cri.go:89] found id: "d529e1a8dc40a6ab1f2b51e1d86c5819a7cae02e60dcc86a018f72ddb94523f1"
	I0811 23:44:10.868383  137978 cri.go:89] found id: "c65c1e64ca450ac7b48648607efcd219484ec0064b2c988f1289ce1b78b07db8"
	I0811 23:44:10.868388  137978 cri.go:89] found id: "1f79bbe2d8cb2b2e137e912e515a29f46e821351a0ac134c781281b119acc66c"
	I0811 23:44:10.868392  137978 cri.go:89] found id: "24d42e36b289e4acc22437a5331118aff3e510567fbaa83f6407823d129ad5d7"
	I0811 23:44:10.868396  137978 cri.go:89] found id: "6024f9aef0a281043827e7556097bdddf70695ffbdd8e11a3a5f3ca6baca26f9"
	I0811 23:44:10.868400  137978 cri.go:89] found id: "9ecaf1f2abdda582ed04056f0de0846a97e5803f559b05701a13e57372cf544a"
	I0811 23:44:10.868409  137978 cri.go:89] found id: ""
	I0811 23:44:10.868415  137978 cri.go:234] Stopping containers: [4425001a8d2b454a91b170a878058197e5f7abe38fa3cdd8e7619a58114a69dd 792bd634ed3940953f2c22c64a76c28dabb0b8c94e837fef81ac4a01d8942c96 d01488efe142281153b08384869059e10873d2fd7f4f17d722b27b398964923e d529e1a8dc40a6ab1f2b51e1d86c5819a7cae02e60dcc86a018f72ddb94523f1 c65c1e64ca450ac7b48648607efcd219484ec0064b2c988f1289ce1b78b07db8 1f79bbe2d8cb2b2e137e912e515a29f46e821351a0ac134c781281b119acc66c 24d42e36b289e4acc22437a5331118aff3e510567fbaa83f6407823d129ad5d7 6024f9aef0a281043827e7556097bdddf70695ffbdd8e11a3a5f3ca6baca26f9 9ecaf1f2abdda582ed04056f0de0846a97e5803f559b05701a13e57372cf544a]
	I0811 23:44:10.868468  137978 ssh_runner.go:195] Run: which crictl
	I0811 23:44:10.877428  137978 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 4425001a8d2b454a91b170a878058197e5f7abe38fa3cdd8e7619a58114a69dd 792bd634ed3940953f2c22c64a76c28dabb0b8c94e837fef81ac4a01d8942c96 d01488efe142281153b08384869059e10873d2fd7f4f17d722b27b398964923e d529e1a8dc40a6ab1f2b51e1d86c5819a7cae02e60dcc86a018f72ddb94523f1 c65c1e64ca450ac7b48648607efcd219484ec0064b2c988f1289ce1b78b07db8 1f79bbe2d8cb2b2e137e912e515a29f46e821351a0ac134c781281b119acc66c 24d42e36b289e4acc22437a5331118aff3e510567fbaa83f6407823d129ad5d7 6024f9aef0a281043827e7556097bdddf70695ffbdd8e11a3a5f3ca6baca26f9 9ecaf1f2abdda582ed04056f0de0846a97e5803f559b05701a13e57372cf544a
	I0811 23:44:11.290918  137978 ssh_runner.go:195] Run: sudo systemctl stop kubelet
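The stop sequence just logged (crictl stop on every kube-system container, then stopping the kubelet so nothing restarts them) condenses to something like the following sketch, built only from the commands already shown above:

    # stop all kube-system containers, then the kubelet that would restart them
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system \
      | xargs -r sudo crictl stop --timeout=10
    sudo systemctl stop kubelet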
	I0811 23:44:11.400093  137978 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0811 23:44:11.410953  137978 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Aug 11 23:42 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Aug 11 23:42 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Aug 11 23:43 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Aug 11 23:42 /etc/kubernetes/scheduler.conf
	
	I0811 23:44:11.411022  137978 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0811 23:44:11.422806  137978 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0811 23:44:11.434104  137978 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0811 23:44:11.444642  137978 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:44:11.444708  137978 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0811 23:44:11.455195  137978 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0811 23:44:11.465782  137978 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0811 23:44:11.465843  137978 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
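The grep-then-remove logic above keeps a kubeconfig only if it already references the expected control-plane endpoint; stale files are deleted so kubeadm can regenerate them. A sketch of the same check:

    # drop any kubeconfig that does not reference the expected endpoint
    ep='https://control-plane.minikube.internal:8443'
    for f in /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
             /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf; do
      sudo grep -q "$ep" "$f" || sudo rm -f "$f"
    done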
	I0811 23:44:11.476515  137978 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0811 23:44:11.487275  137978 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0811 23:44:11.487297  137978 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0811 23:44:11.563057  137978 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0811 23:44:12.992952  137978 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.429821803s)
	I0811 23:44:12.993028  137978 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0811 23:44:13.285794  137978 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0811 23:44:13.493678  137978 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
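Note that the restart path runs individual `kubeadm init` phases rather than a full init. The phase order used above can be replayed as a loop (paths and version as in the log):

    # regenerate certs/kubeconfigs and restart the static control plane, phase by phase;
    # $phase is intentionally unquoted so 'certs all' splits into two arguments
    for phase in 'certs all' 'kubeconfig all' 'kubelet-start' 'control-plane all' 'etcd local'; do
      sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done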
	I0811 23:44:13.803925  137978 api_server.go:52] waiting for apiserver process to appear ...
	I0811 23:44:13.804062  137978 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0811 23:44:13.831510  137978 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0811 23:44:14.365908  137978 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0811 23:44:14.866295  137978 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0811 23:44:15.365349  137978 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0811 23:44:15.406280  137978 api_server.go:72] duration metric: took 1.602355554s to wait for apiserver process to appear ...
	I0811 23:44:15.406306  137978 api_server.go:88] waiting for apiserver healthz status ...
	I0811 23:44:15.406333  137978 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0811 23:44:15.406592  137978 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0811 23:44:15.406623  137978 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0811 23:44:15.406758  137978 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0811 23:44:15.906889  137978 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0811 23:44:20.907537  137978 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0811 23:44:20.907580  137978 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0811 23:44:23.225456  137978 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0811 23:44:23.225486  137978 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0811 23:44:23.225498  137978 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0811 23:44:23.254841  137978 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0811 23:44:23.254875  137978 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0811 23:44:23.407119  137978 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0811 23:44:23.417382  137978 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0811 23:44:23.417411  137978 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0811 23:44:23.907534  137978 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0811 23:44:23.961290  137978 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0811 23:44:23.961356  137978 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0811 23:44:24.407581  137978 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0811 23:44:24.420759  137978 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0811 23:44:24.420782  137978 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0811 23:44:24.909224  137978 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0811 23:44:24.922363  137978 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0811 23:44:24.948546  137978 api_server.go:141] control plane version: v1.27.4
	I0811 23:44:24.948570  137978 api_server.go:131] duration metric: took 9.542257762s to wait for apiserver health ...
	I0811 23:44:24.948581  137978 cni.go:84] Creating CNI manager for ""
	I0811 23:44:24.948587  137978 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0811 23:44:24.951831  137978 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0811 23:44:24.954158  137978 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0811 23:44:24.961993  137978 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.27.4/kubectl ...
	I0811 23:44:24.962017  137978 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0811 23:44:25.015407  137978 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0811 23:44:26.103752  137978 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.27.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.088308985s)
	I0811 23:44:26.103781  137978 system_pods.go:43] waiting for kube-system pods to appear ...
	I0811 23:44:26.128275  137978 system_pods.go:59] 7 kube-system pods found
	I0811 23:44:26.128342  137978 system_pods.go:61] "coredns-5d78c9869d-7zz5s" [e3005957-3506-4ea1-a12a-1961a28c67d4] Running
	I0811 23:44:26.128369  137978 system_pods.go:61] "etcd-pause-634825" [e24d09aa-5f75-4c4a-ab28-c44978cacf22] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0811 23:44:26.128392  137978 system_pods.go:61] "kindnet-q6qpq" [cf34f80e-7018-4003-b7c5-94c7c8ea41da] Running
	I0811 23:44:26.128436  137978 system_pods.go:61] "kube-apiserver-pause-634825" [14d6efed-a5f8-4440-b368-8b3ffef2412b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0811 23:44:26.128460  137978 system_pods.go:61] "kube-controller-manager-pause-634825" [977c2210-4efb-43e0-9a65-ee526f24ca57] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0811 23:44:26.128481  137978 system_pods.go:61] "kube-proxy-sptbv" [f9473cb2-7b87-40e8-891a-aa651f27406d] Running
	I0811 23:44:26.128539  137978 system_pods.go:61] "kube-scheduler-pause-634825" [6d805427-0c85-416f-93b7-a27be1ae2294] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0811 23:44:26.128656  137978 system_pods.go:74] duration metric: took 24.868164ms to wait for pod list to return data ...
	I0811 23:44:26.128686  137978 node_conditions.go:102] verifying NodePressure condition ...
	I0811 23:44:26.132617  137978 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0811 23:44:26.132645  137978 node_conditions.go:123] node cpu capacity is 2
	I0811 23:44:26.132657  137978 node_conditions.go:105] duration metric: took 3.955177ms to run NodePressure ...
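The capacity figures logged here (2 CPUs, 203034800Ki ephemeral storage) come straight from the node object and can be read back with kubectl, e.g.:

    kubectl --context pause-634825 get node pause-634825 \
      -o jsonpath="{.status.capacity.cpu} {.status.capacity['ephemeral-storage']}"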
	I0811 23:44:26.132674  137978 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0811 23:44:26.389037  137978 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0811 23:44:26.396842  137978 kubeadm.go:787] kubelet initialised
	I0811 23:44:26.396864  137978 kubeadm.go:788] duration metric: took 7.807298ms waiting for restarted kubelet to initialise ...
	I0811 23:44:26.396873  137978 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0811 23:44:26.403326  137978 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-7zz5s" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:26.413614  137978 pod_ready.go:92] pod "coredns-5d78c9869d-7zz5s" in "kube-system" namespace has status "Ready":"True"
	I0811 23:44:26.413642  137978 pod_ready.go:81] duration metric: took 10.286092ms waiting for pod "coredns-5d78c9869d-7zz5s" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:26.413656  137978 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-634825" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:28.435506  137978 pod_ready.go:102] pod "etcd-pause-634825" in "kube-system" namespace has status "Ready":"False"
	I0811 23:44:30.965622  137978 pod_ready.go:102] pod "etcd-pause-634825" in "kube-system" namespace has status "Ready":"False"
	I0811 23:44:31.434478  137978 pod_ready.go:92] pod "etcd-pause-634825" in "kube-system" namespace has status "Ready":"True"
	I0811 23:44:31.434505  137978 pod_ready.go:81] duration metric: took 5.020841635s waiting for pod "etcd-pause-634825" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:31.434520  137978 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-634825" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:31.440825  137978 pod_ready.go:92] pod "kube-apiserver-pause-634825" in "kube-system" namespace has status "Ready":"True"
	I0811 23:44:31.440851  137978 pod_ready.go:81] duration metric: took 6.323103ms waiting for pod "kube-apiserver-pause-634825" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:31.440863  137978 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-634825" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:31.447339  137978 pod_ready.go:92] pod "kube-controller-manager-pause-634825" in "kube-system" namespace has status "Ready":"True"
	I0811 23:44:31.447372  137978 pod_ready.go:81] duration metric: took 6.494469ms waiting for pod "kube-controller-manager-pause-634825" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:31.447384  137978 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-sptbv" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:31.455388  137978 pod_ready.go:92] pod "kube-proxy-sptbv" in "kube-system" namespace has status "Ready":"True"
	I0811 23:44:31.455407  137978 pod_ready.go:81] duration metric: took 8.015579ms waiting for pod "kube-proxy-sptbv" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:31.455417  137978 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-634825" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:31.707788  137978 pod_ready.go:92] pod "kube-scheduler-pause-634825" in "kube-system" namespace has status "Ready":"True"
	I0811 23:44:31.707813  137978 pod_ready.go:81] duration metric: took 252.388977ms waiting for pod "kube-scheduler-pause-634825" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:31.707823  137978 pod_ready.go:38] duration metric: took 5.310939544s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
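The readiness loop above is minikube's own polling; the same condition can be checked by hand with `kubectl wait`, using one of the label selectors listed in the log (a sketch; the timeout mirrors the 4m budget above):

    kubectl --context pause-634825 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=4m0s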
	I0811 23:44:31.707843  137978 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0811 23:44:31.718825  137978 ops.go:34] apiserver oom_adj: -16
	I0811 23:44:31.718842  137978 kubeadm.go:640] restartCluster took 30.943856137s
	I0811 23:44:31.718851  137978 kubeadm.go:406] StartCluster complete in 31.02393291s
	I0811 23:44:31.718865  137978 settings.go:142] acquiring lock: {Name:mkcdb2c6d2ae1cdcfca5cf5a992c9589250c7de5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:44:31.718922  137978 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17044-2333/kubeconfig
	I0811 23:44:31.719576  137978 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17044-2333/kubeconfig: {Name:mk6629381ac7815dbe689239b7a7612d237ee7bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:44:31.720231  137978 kapi.go:59] client config for pause-634825: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17044-2333/.minikube/profiles/pause-634825/client.crt", KeyFile:"/home/jenkins/minikube-integration/17044-2333/.minikube/profiles/pause-634825/client.key", CAFile:"/home/jenkins/minikube-integration/17044-2333/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16eb290), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0811 23:44:31.720696  137978 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0811 23:44:31.720950  137978 config.go:182] Loaded profile config "pause-634825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0811 23:44:31.720979  137978 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0811 23:44:31.725590  137978 out.go:177] * Enabled addons: 
	I0811 23:44:31.723434  137978 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-634825" context rescaled to 1 replicas
	I0811 23:44:31.725697  137978 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0811 23:44:31.727670  137978 out.go:177] * Verifying Kubernetes components...
	I0811 23:44:31.729508  137978 addons.go:502] enable addons completed in 8.522741ms: enabled=[]
	I0811 23:44:31.731692  137978 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0811 23:44:31.858559  137978 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0811 23:44:31.858611  137978 node_ready.go:35] waiting up to 6m0s for node "pause-634825" to be "Ready" ...
	I0811 23:44:31.909894  137978 node_ready.go:49] node "pause-634825" has status "Ready":"True"
	I0811 23:44:31.909921  137978 node_ready.go:38] duration metric: took 51.295166ms waiting for node "pause-634825" to be "Ready" ...
	I0811 23:44:31.909931  137978 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0811 23:44:32.111204  137978 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-7zz5s" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:32.511502  137978 pod_ready.go:92] pod "coredns-5d78c9869d-7zz5s" in "kube-system" namespace has status "Ready":"True"
	I0811 23:44:32.511527  137978 pod_ready.go:81] duration metric: took 400.293104ms waiting for pod "coredns-5d78c9869d-7zz5s" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:32.511539  137978 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-634825" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:32.907947  137978 pod_ready.go:92] pod "etcd-pause-634825" in "kube-system" namespace has status "Ready":"True"
	I0811 23:44:32.907973  137978 pod_ready.go:81] duration metric: took 396.426665ms waiting for pod "etcd-pause-634825" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:32.907988  137978 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-634825" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:33.307880  137978 pod_ready.go:92] pod "kube-apiserver-pause-634825" in "kube-system" namespace has status "Ready":"True"
	I0811 23:44:33.307910  137978 pod_ready.go:81] duration metric: took 399.913525ms waiting for pod "kube-apiserver-pause-634825" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:33.307934  137978 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-634825" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:33.708207  137978 pod_ready.go:92] pod "kube-controller-manager-pause-634825" in "kube-system" namespace has status "Ready":"True"
	I0811 23:44:33.708231  137978 pod_ready.go:81] duration metric: took 400.282578ms waiting for pod "kube-controller-manager-pause-634825" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:33.708244  137978 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sptbv" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:34.107687  137978 pod_ready.go:92] pod "kube-proxy-sptbv" in "kube-system" namespace has status "Ready":"True"
	I0811 23:44:34.107707  137978 pod_ready.go:81] duration metric: took 399.456201ms waiting for pod "kube-proxy-sptbv" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:34.107718  137978 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-634825" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:34.508256  137978 pod_ready.go:92] pod "kube-scheduler-pause-634825" in "kube-system" namespace has status "Ready":"True"
	I0811 23:44:34.508282  137978 pod_ready.go:81] duration metric: took 400.556034ms waiting for pod "kube-scheduler-pause-634825" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:34.508292  137978 pod_ready.go:38] duration metric: took 2.59834828s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0811 23:44:34.508306  137978 api_server.go:52] waiting for apiserver process to appear ...
	I0811 23:44:34.508358  137978 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0811 23:44:34.524570  137978 api_server.go:72] duration metric: took 2.798782978s to wait for apiserver process to appear ...
	I0811 23:44:34.524597  137978 api_server.go:88] waiting for apiserver healthz status ...
	I0811 23:44:34.524613  137978 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0811 23:44:34.536485  137978 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0811 23:44:34.537800  137978 api_server.go:141] control plane version: v1.27.4
	I0811 23:44:34.537824  137978 api_server.go:131] duration metric: took 13.22057ms to wait for apiserver health ...
	I0811 23:44:34.537833  137978 system_pods.go:43] waiting for kube-system pods to appear ...
	I0811 23:44:34.712104  137978 system_pods.go:59] 7 kube-system pods found
	I0811 23:44:34.712134  137978 system_pods.go:61] "coredns-5d78c9869d-7zz5s" [e3005957-3506-4ea1-a12a-1961a28c67d4] Running
	I0811 23:44:34.712141  137978 system_pods.go:61] "etcd-pause-634825" [e24d09aa-5f75-4c4a-ab28-c44978cacf22] Running
	I0811 23:44:34.712146  137978 system_pods.go:61] "kindnet-q6qpq" [cf34f80e-7018-4003-b7c5-94c7c8ea41da] Running
	I0811 23:44:34.712160  137978 system_pods.go:61] "kube-apiserver-pause-634825" [14d6efed-a5f8-4440-b368-8b3ffef2412b] Running
	I0811 23:44:34.712167  137978 system_pods.go:61] "kube-controller-manager-pause-634825" [977c2210-4efb-43e0-9a65-ee526f24ca57] Running
	I0811 23:44:34.712173  137978 system_pods.go:61] "kube-proxy-sptbv" [f9473cb2-7b87-40e8-891a-aa651f27406d] Running
	I0811 23:44:34.712178  137978 system_pods.go:61] "kube-scheduler-pause-634825" [6d805427-0c85-416f-93b7-a27be1ae2294] Running
	I0811 23:44:34.712187  137978 system_pods.go:74] duration metric: took 174.348754ms to wait for pod list to return data ...
	I0811 23:44:34.712196  137978 default_sa.go:34] waiting for default service account to be created ...
	I0811 23:44:34.907438  137978 default_sa.go:45] found service account: "default"
	I0811 23:44:34.907463  137978 default_sa.go:55] duration metric: took 195.261651ms for default service account to be created ...
	I0811 23:44:34.907474  137978 system_pods.go:116] waiting for k8s-apps to be running ...
	I0811 23:44:35.114088  137978 system_pods.go:86] 7 kube-system pods found
	I0811 23:44:35.114179  137978 system_pods.go:89] "coredns-5d78c9869d-7zz5s" [e3005957-3506-4ea1-a12a-1961a28c67d4] Running
	I0811 23:44:35.114201  137978 system_pods.go:89] "etcd-pause-634825" [e24d09aa-5f75-4c4a-ab28-c44978cacf22] Running
	I0811 23:44:35.114245  137978 system_pods.go:89] "kindnet-q6qpq" [cf34f80e-7018-4003-b7c5-94c7c8ea41da] Running
	I0811 23:44:35.114270  137978 system_pods.go:89] "kube-apiserver-pause-634825" [14d6efed-a5f8-4440-b368-8b3ffef2412b] Running
	I0811 23:44:35.114291  137978 system_pods.go:89] "kube-controller-manager-pause-634825" [977c2210-4efb-43e0-9a65-ee526f24ca57] Running
	I0811 23:44:35.114328  137978 system_pods.go:89] "kube-proxy-sptbv" [f9473cb2-7b87-40e8-891a-aa651f27406d] Running
	I0811 23:44:35.114357  137978 system_pods.go:89] "kube-scheduler-pause-634825" [6d805427-0c85-416f-93b7-a27be1ae2294] Running
	I0811 23:44:35.114381  137978 system_pods.go:126] duration metric: took 206.901804ms to wait for k8s-apps to be running ...
	I0811 23:44:35.114468  137978 system_svc.go:44] waiting for kubelet service to be running ....
	I0811 23:44:35.114564  137978 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0811 23:44:35.131486  137978 system_svc.go:56] duration metric: took 17.008961ms WaitForService to wait for kubelet.
	I0811 23:44:35.131510  137978 kubeadm.go:581] duration metric: took 3.405728386s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0811 23:44:35.131531  137978 node_conditions.go:102] verifying NodePressure condition ...
	I0811 23:44:35.310376  137978 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0811 23:44:35.310412  137978 node_conditions.go:123] node cpu capacity is 2
	I0811 23:44:35.310423  137978 node_conditions.go:105] duration metric: took 178.887099ms to run NodePressure ...
	I0811 23:44:35.310434  137978 start.go:228] waiting for startup goroutines ...
	I0811 23:44:35.310449  137978 start.go:233] waiting for cluster config update ...
	I0811 23:44:35.310461  137978 start.go:242] writing updated cluster config ...
	I0811 23:44:35.310854  137978 ssh_runner.go:195] Run: rm -f paused
	I0811 23:44:35.393749  137978 start.go:599] kubectl: 1.27.4, cluster: 1.27.4 (minor skew: 0)
	I0811 23:44:35.396926  137978 out.go:177] * Done! kubectl is now configured to use "pause-634825" cluster and "default" namespace by default

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-634825
helpers_test.go:235: (dbg) docker inspect pause-634825:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7570f01b50c089bb59b90681fa5523c98b6e7c9e207f592f6e30ef71793ffb3e",
	        "Created": "2023-08-11T23:42:35.958696069Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 132981,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-08-11T23:42:36.349359016Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:abe4482d178dd08cce0cdcb8e444349673c3edfa8e7d6462144a8d9173479eb6",
	        "ResolvConfPath": "/var/lib/docker/containers/7570f01b50c089bb59b90681fa5523c98b6e7c9e207f592f6e30ef71793ffb3e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7570f01b50c089bb59b90681fa5523c98b6e7c9e207f592f6e30ef71793ffb3e/hostname",
	        "HostsPath": "/var/lib/docker/containers/7570f01b50c089bb59b90681fa5523c98b6e7c9e207f592f6e30ef71793ffb3e/hosts",
	        "LogPath": "/var/lib/docker/containers/7570f01b50c089bb59b90681fa5523c98b6e7c9e207f592f6e30ef71793ffb3e/7570f01b50c089bb59b90681fa5523c98b6e7c9e207f592f6e30ef71793ffb3e-json.log",
	        "Name": "/pause-634825",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-634825:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-634825",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/48d96744669f0db6ef1ed51a3bff756ca2ab4c9761a407cc6e9a399f7922b313-init/diff:/var/lib/docker/overlay2/9f8bf17bd2eed1bf502486fc30f9be0589884e58aed50b5fbf77bc48ebc9a592/diff",
	                "MergedDir": "/var/lib/docker/overlay2/48d96744669f0db6ef1ed51a3bff756ca2ab4c9761a407cc6e9a399f7922b313/merged",
	                "UpperDir": "/var/lib/docker/overlay2/48d96744669f0db6ef1ed51a3bff756ca2ab4c9761a407cc6e9a399f7922b313/diff",
	                "WorkDir": "/var/lib/docker/overlay2/48d96744669f0db6ef1ed51a3bff756ca2ab4c9761a407cc6e9a399f7922b313/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-634825",
	                "Source": "/var/lib/docker/volumes/pause-634825/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-634825",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-634825",
	                "name.minikube.sigs.k8s.io": "pause-634825",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cd139406558147eeaa5a424b5314ec71fc805e07b0a9f9ebd7fa779c74b2b152",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32963"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32962"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32959"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32961"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32960"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/cd1394065581",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-634825": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "7570f01b50c0",
	                        "pause-634825"
	                    ],
	                    "NetworkID": "d336f126a2be1940786fbc43fe7ddf30c0d968fec797cc397b6c954713823928",
	                    "EndpointID": "c211db29f9840c3b753c3e7babab5a783d4d6285613b88b99c62c64e4e85054a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
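The inspect dump above carries far more than the test asserts on; the fields that matter (container state, per-network IP, host-mapped SSH port) can be pulled directly with the docker CLI. A minimal sketch, assuming the pause-634825 container from the dump is still running:

	docker inspect pause-634825 --format '{{.State.Status}} {{(index .NetworkSettings.Networks "pause-634825").IPAddress}}'
	docker port pause-634825 22/tcp    # prints 127.0.0.1:32963, matching the Ports map above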
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-634825 -n pause-634825
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p pause-634825 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p pause-634825 logs -n 25: (2.149906443s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p NoKubernetes-886838            | NoKubernetes-886838       | jenkins | v1.31.1 | 11 Aug 23 23:36 UTC |                     |
	|         | --no-kubernetes                   |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20         |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-886838            | NoKubernetes-886838       | jenkins | v1.31.1 | 11 Aug 23 23:36 UTC | 11 Aug 23 23:37 UTC |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-886838            | NoKubernetes-886838       | jenkins | v1.31.1 | 11 Aug 23 23:37 UTC | 11 Aug 23 23:37 UTC |
	|         | --no-kubernetes                   |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-886838            | NoKubernetes-886838       | jenkins | v1.31.1 | 11 Aug 23 23:37 UTC | 11 Aug 23 23:37 UTC |
	| start   | -p NoKubernetes-886838            | NoKubernetes-886838       | jenkins | v1.31.1 | 11 Aug 23 23:37 UTC | 11 Aug 23 23:37 UTC |
	|         | --no-kubernetes                   |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-886838 sudo       | NoKubernetes-886838       | jenkins | v1.31.1 | 11 Aug 23 23:37 UTC |                     |
	|         | systemctl is-active --quiet       |                           |         |         |                     |                     |
	|         | service kubelet                   |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-886838            | NoKubernetes-886838       | jenkins | v1.31.1 | 11 Aug 23 23:37 UTC | 11 Aug 23 23:37 UTC |
	| start   | -p NoKubernetes-886838            | NoKubernetes-886838       | jenkins | v1.31.1 | 11 Aug 23 23:37 UTC | 11 Aug 23 23:37 UTC |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-886838 sudo       | NoKubernetes-886838       | jenkins | v1.31.1 | 11 Aug 23 23:37 UTC |                     |
	|         | systemctl is-active --quiet       |                           |         |         |                     |                     |
	|         | service kubelet                   |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-886838            | NoKubernetes-886838       | jenkins | v1.31.1 | 11 Aug 23 23:37 UTC | 11 Aug 23 23:38 UTC |
	| start   | -p kubernetes-upgrade-788862      | kubernetes-upgrade-788862 | jenkins | v1.31.1 | 11 Aug 23 23:38 UTC | 11 Aug 23 23:39 UTC |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p missing-upgrade-550468         | missing-upgrade-550468    | jenkins | v1.31.1 | 11 Aug 23 23:39 UTC |                     |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-788862      | kubernetes-upgrade-788862 | jenkins | v1.31.1 | 11 Aug 23 23:39 UTC | 11 Aug 23 23:39 UTC |
	| start   | -p kubernetes-upgrade-788862      | kubernetes-upgrade-788862 | jenkins | v1.31.1 | 11 Aug 23 23:39 UTC | 11 Aug 23 23:43 UTC |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.0-rc.0 |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| delete  | -p missing-upgrade-550468         | missing-upgrade-550468    | jenkins | v1.31.1 | 11 Aug 23 23:39 UTC | 11 Aug 23 23:39 UTC |
	| start   | -p stopped-upgrade-773979         | stopped-upgrade-773979    | jenkins | v1.31.1 | 11 Aug 23 23:41 UTC |                     |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-773979         | stopped-upgrade-773979    | jenkins | v1.31.1 | 11 Aug 23 23:41 UTC | 11 Aug 23 23:41 UTC |
	| start   | -p running-upgrade-341136         | running-upgrade-341136    | jenkins | v1.31.1 | 11 Aug 23 23:42 UTC |                     |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-341136         | running-upgrade-341136    | jenkins | v1.31.1 | 11 Aug 23 23:42 UTC | 11 Aug 23 23:42 UTC |
	| start   | -p pause-634825 --memory=2048     | pause-634825              | jenkins | v1.31.1 | 11 Aug 23 23:42 UTC | 11 Aug 23 23:43 UTC |
	|         | --install-addons=false            |                           |         |         |                     |                     |
	|         | --wait=all --driver=docker        |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p pause-634825                   | pause-634825              | jenkins | v1.31.1 | 11 Aug 23 23:43 UTC | 11 Aug 23 23:44 UTC |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-788862      | kubernetes-upgrade-788862 | jenkins | v1.31.1 | 11 Aug 23 23:43 UTC |                     |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-788862      | kubernetes-upgrade-788862 | jenkins | v1.31.1 | 11 Aug 23 23:43 UTC | 11 Aug 23 23:44 UTC |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.0-rc.0 |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-788862      | kubernetes-upgrade-788862 | jenkins | v1.31.1 | 11 Aug 23 23:44 UTC | 11 Aug 23 23:44 UTC |
	| start   | -p force-systemd-flag-847326      | force-systemd-flag-847326 | jenkins | v1.31.1 | 11 Aug 23 23:44 UTC |                     |
	|         | --memory=2048 --force-systemd     |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	|---------|-----------------------------------|---------------------------|---------|---------|---------------------|---------------------|
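	# Every profile in the audit table above is exercised with the same flag pattern; a
	# representative invocation (values taken from the table, profile name arbitrary):
	#   out/minikube-linux-arm64 start -p <profile> --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=crio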
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/11 23:44:25
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.20.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0811 23:44:25.957152  141718 out.go:296] Setting OutFile to fd 1 ...
	I0811 23:44:25.957317  141718 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:44:25.957325  141718 out.go:309] Setting ErrFile to fd 2...
	I0811 23:44:25.957331  141718 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:44:25.957601  141718 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17044-2333/.minikube/bin
	I0811 23:44:25.958009  141718 out.go:303] Setting JSON to false
	I0811 23:44:25.959045  141718 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5214,"bootTime":1691792252,"procs":279,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0811 23:44:25.959119  141718 start.go:138] virtualization:  
	I0811 23:44:25.962521  141718 out.go:177] * [force-systemd-flag-847326] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	I0811 23:44:25.964424  141718 out.go:177]   - MINIKUBE_LOCATION=17044
	I0811 23:44:25.964505  141718 notify.go:220] Checking for updates...
	I0811 23:44:25.969742  141718 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0811 23:44:25.971475  141718 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17044-2333/kubeconfig
	I0811 23:44:25.973132  141718 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17044-2333/.minikube
	I0811 23:44:25.974776  141718 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0811 23:44:25.976263  141718 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0811 23:44:25.982366  141718 config.go:182] Loaded profile config "pause-634825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0811 23:44:25.982557  141718 driver.go:373] Setting default libvirt URI to qemu:///system
	I0811 23:44:26.027228  141718 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0811 23:44:26.027324  141718 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0811 23:44:26.160610  141718 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:45 SystemTime:2023-08-11 23:44:26.150287761 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0811 23:44:26.160718  141718 docker.go:294] overlay module found
	I0811 23:44:26.163379  141718 out.go:177] * Using the docker driver based on user configuration
	I0811 23:44:26.165286  141718 start.go:298] selected driver: docker
	I0811 23:44:26.165303  141718 start.go:901] validating driver "docker" against <nil>
	I0811 23:44:26.165316  141718 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0811 23:44:26.165933  141718 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0811 23:44:26.280345  141718 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:45 SystemTime:2023-08-11 23:44:26.269460259 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0811 23:44:26.280500  141718 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0811 23:44:26.280711  141718 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0811 23:44:26.282639  141718 out.go:177] * Using Docker driver with root privileges
	I0811 23:44:26.284699  141718 cni.go:84] Creating CNI manager for ""
	I0811 23:44:26.284720  141718 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0811 23:44:26.284729  141718 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0811 23:44:26.284756  141718 start_flags.go:319] config:
	{Name:force-systemd-flag-847326 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:force-systemd-flag-847326 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0811 23:44:26.287147  141718 out.go:177] * Starting control plane node force-systemd-flag-847326 in cluster force-systemd-flag-847326
	I0811 23:44:26.289024  141718 cache.go:122] Beginning downloading kic base image for docker with crio
	I0811 23:44:26.290916  141718 out.go:177] * Pulling base image ...
	I0811 23:44:26.292628  141718 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0811 23:44:26.292681  141718 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17044-2333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-arm64.tar.lz4
	I0811 23:44:26.292692  141718 cache.go:57] Caching tarball of preloaded images
	I0811 23:44:26.292708  141718 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 in local docker daemon
	I0811 23:44:26.292784  141718 preload.go:174] Found /home/jenkins/minikube-integration/17044-2333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0811 23:44:26.292794  141718 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on crio
	I0811 23:44:26.292907  141718 profile.go:148] Saving config to /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/force-systemd-flag-847326/config.json ...
	I0811 23:44:26.292925  141718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/force-systemd-flag-847326/config.json: {Name:mk0d5fddc1d5c8d8c581afb5bc750de470eaa853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:44:26.315040  141718 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 in local docker daemon, skipping pull
	I0811 23:44:26.315061  141718 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 exists in daemon, skipping load
	I0811 23:44:26.315084  141718 cache.go:195] Successfully downloaded all kic artifacts
	I0811 23:44:26.315214  141718 start.go:365] acquiring machines lock for force-systemd-flag-847326: {Name:mka36b9178bdba7c081cc28eabba2f8f60b312c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 23:44:26.315411  141718 start.go:369] acquired machines lock for "force-systemd-flag-847326" in 174.056µs
	I0811 23:44:26.315476  141718 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-847326 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:force-systemd-flag-847326 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0811 23:44:26.315580  141718 start.go:125] createHost starting for "" (driver="docker")
	I0811 23:44:24.954158  137978 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0811 23:44:24.961993  137978 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.27.4/kubectl ...
	I0811 23:44:24.962017  137978 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0811 23:44:25.015407  137978 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0811 23:44:26.103752  137978 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.27.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.088308985s)
	I0811 23:44:26.103781  137978 system_pods.go:43] waiting for kube-system pods to appear ...
	I0811 23:44:26.128275  137978 system_pods.go:59] 7 kube-system pods found
	I0811 23:44:26.128342  137978 system_pods.go:61] "coredns-5d78c9869d-7zz5s" [e3005957-3506-4ea1-a12a-1961a28c67d4] Running
	I0811 23:44:26.128369  137978 system_pods.go:61] "etcd-pause-634825" [e24d09aa-5f75-4c4a-ab28-c44978cacf22] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0811 23:44:26.128392  137978 system_pods.go:61] "kindnet-q6qpq" [cf34f80e-7018-4003-b7c5-94c7c8ea41da] Running
	I0811 23:44:26.128436  137978 system_pods.go:61] "kube-apiserver-pause-634825" [14d6efed-a5f8-4440-b368-8b3ffef2412b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0811 23:44:26.128460  137978 system_pods.go:61] "kube-controller-manager-pause-634825" [977c2210-4efb-43e0-9a65-ee526f24ca57] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0811 23:44:26.128481  137978 system_pods.go:61] "kube-proxy-sptbv" [f9473cb2-7b87-40e8-891a-aa651f27406d] Running
	I0811 23:44:26.128539  137978 system_pods.go:61] "kube-scheduler-pause-634825" [6d805427-0c85-416f-93b7-a27be1ae2294] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0811 23:44:26.128656  137978 system_pods.go:74] duration metric: took 24.868164ms to wait for pod list to return data ...
	I0811 23:44:26.128686  137978 node_conditions.go:102] verifying NodePressure condition ...
	I0811 23:44:26.132617  137978 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0811 23:44:26.132645  137978 node_conditions.go:123] node cpu capacity is 2
	I0811 23:44:26.132657  137978 node_conditions.go:105] duration metric: took 3.955177ms to run NodePressure ...
	I0811 23:44:26.132674  137978 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0811 23:44:26.389037  137978 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0811 23:44:26.396842  137978 kubeadm.go:787] kubelet initialised
	I0811 23:44:26.396864  137978 kubeadm.go:788] duration metric: took 7.807298ms waiting for restarted kubelet to initialise ...
	I0811 23:44:26.396873  137978 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0811 23:44:26.403326  137978 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-7zz5s" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:26.413614  137978 pod_ready.go:92] pod "coredns-5d78c9869d-7zz5s" in "kube-system" namespace has status "Ready":"True"
	I0811 23:44:26.413642  137978 pod_ready.go:81] duration metric: took 10.286092ms waiting for pod "coredns-5d78c9869d-7zz5s" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:26.413656  137978 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-634825" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:28.435506  137978 pod_ready.go:102] pod "etcd-pause-634825" in "kube-system" namespace has status "Ready":"False"
	I0811 23:44:26.318152  141718 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0811 23:44:26.318553  141718 start.go:159] libmachine.API.Create for "force-systemd-flag-847326" (driver="docker")
	I0811 23:44:26.318574  141718 client.go:168] LocalClient.Create starting
	I0811 23:44:26.318729  141718 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem
	I0811 23:44:26.318816  141718 main.go:141] libmachine: Decoding PEM data...
	I0811 23:44:26.318835  141718 main.go:141] libmachine: Parsing certificate...
	I0811 23:44:26.318993  141718 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17044-2333/.minikube/certs/cert.pem
	I0811 23:44:26.319029  141718 main.go:141] libmachine: Decoding PEM data...
	I0811 23:44:26.319041  141718 main.go:141] libmachine: Parsing certificate...
	I0811 23:44:26.319865  141718 cli_runner.go:164] Run: docker network inspect force-systemd-flag-847326 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0811 23:44:26.343555  141718 cli_runner.go:211] docker network inspect force-systemd-flag-847326 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0811 23:44:26.343641  141718 network_create.go:281] running [docker network inspect force-systemd-flag-847326] to gather additional debugging logs...
	I0811 23:44:26.343658  141718 cli_runner.go:164] Run: docker network inspect force-systemd-flag-847326
	W0811 23:44:26.368853  141718 cli_runner.go:211] docker network inspect force-systemd-flag-847326 returned with exit code 1
	I0811 23:44:26.368886  141718 network_create.go:284] error running [docker network inspect force-systemd-flag-847326]: docker network inspect force-systemd-flag-847326: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-847326 not found
	I0811 23:44:26.368899  141718 network_create.go:286] output of [docker network inspect force-systemd-flag-847326]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-847326 not found
	
	** /stderr **
	I0811 23:44:26.368961  141718 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0811 23:44:26.393814  141718 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-cb015cdafab9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:3c:25:af:38} reservation:<nil>}
	I0811 23:44:26.394205  141718 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-c2f4372f433a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:1d:72:42:dd} reservation:<nil>}
	I0811 23:44:26.394708  141718 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000b7f7f0}
	I0811 23:44:26.394726  141718 network_create.go:123] attempt to create docker network force-systemd-flag-847326 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0811 23:44:26.394780  141718 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-847326 force-systemd-flag-847326
	I0811 23:44:26.497522  141718 network_create.go:107] docker network force-systemd-flag-847326 192.168.67.0/24 created
	I0811 23:44:26.497556  141718 kic.go:117] calculated static IP "192.168.67.2" for the "force-systemd-flag-847326" container
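	# The subnet picker above skipped 192.168.49.0/24 and 192.168.58.0/24 as taken and
	# settled on 192.168.67.0/24; a sketch for confirming what was actually created
	# (assuming the network still exists):
	#   docker network inspect force-systemd-flag-847326 --format '{{(index .IPAM.Config 0).Subnet}} gw {{(index .IPAM.Config 0).Gateway}}'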
	I0811 23:44:26.497631  141718 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0811 23:44:26.515311  141718 cli_runner.go:164] Run: docker volume create force-systemd-flag-847326 --label name.minikube.sigs.k8s.io=force-systemd-flag-847326 --label created_by.minikube.sigs.k8s.io=true
	I0811 23:44:26.534514  141718 oci.go:103] Successfully created a docker volume force-systemd-flag-847326
	I0811 23:44:26.534601  141718 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-847326-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-847326 --entrypoint /usr/bin/test -v force-systemd-flag-847326:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 -d /var/lib
	I0811 23:44:27.145393  141718 oci.go:107] Successfully prepared a docker volume force-systemd-flag-847326
	I0811 23:44:27.145440  141718 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0811 23:44:27.145461  141718 kic.go:190] Starting extracting preloaded images to volume ...
	I0811 23:44:27.145550  141718 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17044-2333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-847326:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 -I lz4 -xf /preloaded.tar -C /extractDir
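	# The run above side-loads the preloaded image tarball into the profile volume before
	# the node container starts; the same pattern works generically (sketch, with
	# hypothetical names):
	#   docker run --rm --entrypoint /usr/bin/tar -v <tarball>.tar.lz4:/preloaded.tar:ro -v <volume>:/extractDir <base-image> -I lz4 -xf /preloaded.tar -C /extractDir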
	I0811 23:44:30.965622  137978 pod_ready.go:102] pod "etcd-pause-634825" in "kube-system" namespace has status "Ready":"False"
	I0811 23:44:31.434478  137978 pod_ready.go:92] pod "etcd-pause-634825" in "kube-system" namespace has status "Ready":"True"
	I0811 23:44:31.434505  137978 pod_ready.go:81] duration metric: took 5.020841635s waiting for pod "etcd-pause-634825" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:31.434520  137978 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-634825" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:31.440825  137978 pod_ready.go:92] pod "kube-apiserver-pause-634825" in "kube-system" namespace has status "Ready":"True"
	I0811 23:44:31.440851  137978 pod_ready.go:81] duration metric: took 6.323103ms waiting for pod "kube-apiserver-pause-634825" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:31.440863  137978 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-634825" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:31.447339  137978 pod_ready.go:92] pod "kube-controller-manager-pause-634825" in "kube-system" namespace has status "Ready":"True"
	I0811 23:44:31.447372  137978 pod_ready.go:81] duration metric: took 6.494469ms waiting for pod "kube-controller-manager-pause-634825" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:31.447384  137978 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-sptbv" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:31.455388  137978 pod_ready.go:92] pod "kube-proxy-sptbv" in "kube-system" namespace has status "Ready":"True"
	I0811 23:44:31.455407  137978 pod_ready.go:81] duration metric: took 8.015579ms waiting for pod "kube-proxy-sptbv" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:31.455417  137978 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-634825" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:31.707788  137978 pod_ready.go:92] pod "kube-scheduler-pause-634825" in "kube-system" namespace has status "Ready":"True"
	I0811 23:44:31.707813  137978 pod_ready.go:81] duration metric: took 252.388977ms waiting for pod "kube-scheduler-pause-634825" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:31.707823  137978 pod_ready.go:38] duration metric: took 5.310939544s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0811 23:44:31.707843  137978 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0811 23:44:31.718825  137978 ops.go:34] apiserver oom_adj: -16
	I0811 23:44:31.718842  137978 kubeadm.go:640] restartCluster took 30.943856137s
	I0811 23:44:31.718851  137978 kubeadm.go:406] StartCluster complete in 31.02393291s
	I0811 23:44:31.718865  137978 settings.go:142] acquiring lock: {Name:mkcdb2c6d2ae1cdcfca5cf5a992c9589250c7de5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:44:31.718922  137978 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17044-2333/kubeconfig
	I0811 23:44:31.719576  137978 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17044-2333/kubeconfig: {Name:mk6629381ac7815dbe689239b7a7612d237ee7bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:44:31.720231  137978 kapi.go:59] client config for pause-634825: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17044-2333/.minikube/profiles/pause-634825/client.crt", KeyFile:"/home/jenkins/minikube-integration/17044-2333/.minikube/profiles/pause-634825/client.key", CAFile:"/home/jenkins/minikube-integration/17044-2333/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16eb290), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0811 23:44:31.720696  137978 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0811 23:44:31.720950  137978 config.go:182] Loaded profile config "pause-634825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0811 23:44:31.720979  137978 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0811 23:44:31.725590  137978 out.go:177] * Enabled addons: 
	I0811 23:44:31.723434  137978 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-634825" context rescaled to 1 replicas
	I0811 23:44:31.725697  137978 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0811 23:44:31.727670  137978 out.go:177] * Verifying Kubernetes components...
	I0811 23:44:31.729508  137978 addons.go:502] enable addons completed in 8.522741ms: enabled=[]
	I0811 23:44:31.731692  137978 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0811 23:44:31.858559  137978 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0811 23:44:31.858611  137978 node_ready.go:35] waiting up to 6m0s for node "pause-634825" to be "Ready" ...
	I0811 23:44:31.909894  137978 node_ready.go:49] node "pause-634825" has status "Ready":"True"
	I0811 23:44:31.909921  137978 node_ready.go:38] duration metric: took 51.295166ms waiting for node "pause-634825" to be "Ready" ...
	I0811 23:44:31.909931  137978 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0811 23:44:32.111204  137978 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-7zz5s" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:32.511502  137978 pod_ready.go:92] pod "coredns-5d78c9869d-7zz5s" in "kube-system" namespace has status "Ready":"True"
	I0811 23:44:32.511527  137978 pod_ready.go:81] duration metric: took 400.293104ms waiting for pod "coredns-5d78c9869d-7zz5s" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:32.511539  137978 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-634825" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:32.907947  137978 pod_ready.go:92] pod "etcd-pause-634825" in "kube-system" namespace has status "Ready":"True"
	I0811 23:44:32.907973  137978 pod_ready.go:81] duration metric: took 396.426665ms waiting for pod "etcd-pause-634825" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:32.907988  137978 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-634825" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:33.307880  137978 pod_ready.go:92] pod "kube-apiserver-pause-634825" in "kube-system" namespace has status "Ready":"True"
	I0811 23:44:33.307910  137978 pod_ready.go:81] duration metric: took 399.913525ms waiting for pod "kube-apiserver-pause-634825" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:33.307934  137978 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-634825" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:33.708207  137978 pod_ready.go:92] pod "kube-controller-manager-pause-634825" in "kube-system" namespace has status "Ready":"True"
	I0811 23:44:33.708231  137978 pod_ready.go:81] duration metric: took 400.282578ms waiting for pod "kube-controller-manager-pause-634825" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:33.708244  137978 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sptbv" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:34.107687  137978 pod_ready.go:92] pod "kube-proxy-sptbv" in "kube-system" namespace has status "Ready":"True"
	I0811 23:44:34.107707  137978 pod_ready.go:81] duration metric: took 399.456201ms waiting for pod "kube-proxy-sptbv" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:34.107718  137978 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-634825" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:34.508256  137978 pod_ready.go:92] pod "kube-scheduler-pause-634825" in "kube-system" namespace has status "Ready":"True"
	I0811 23:44:34.508282  137978 pod_ready.go:81] duration metric: took 400.556034ms waiting for pod "kube-scheduler-pause-634825" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:34.508292  137978 pod_ready.go:38] duration metric: took 2.59834828s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
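	# The pod_ready loop above is equivalent in effect to waiting on the Ready condition
	# per system-critical component; roughly, as a kubectl sketch (context name taken
	# from these logs):
	#   kubectl --context pause-634825 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=6m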
	I0811 23:44:34.508306  137978 api_server.go:52] waiting for apiserver process to appear ...
	I0811 23:44:34.508358  137978 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0811 23:44:34.524570  137978 api_server.go:72] duration metric: took 2.798782978s to wait for apiserver process to appear ...
	I0811 23:44:34.524597  137978 api_server.go:88] waiting for apiserver healthz status ...
	I0811 23:44:34.524613  137978 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0811 23:44:34.536485  137978 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0811 23:44:34.537800  137978 api_server.go:141] control plane version: v1.27.4
	I0811 23:44:34.537824  137978 api_server.go:131] duration metric: took 13.22057ms to wait for apiserver health ...
	I0811 23:44:34.537833  137978 system_pods.go:43] waiting for kube-system pods to appear ...
	I0811 23:44:34.712104  137978 system_pods.go:59] 7 kube-system pods found
	I0811 23:44:34.712134  137978 system_pods.go:61] "coredns-5d78c9869d-7zz5s" [e3005957-3506-4ea1-a12a-1961a28c67d4] Running
	I0811 23:44:34.712141  137978 system_pods.go:61] "etcd-pause-634825" [e24d09aa-5f75-4c4a-ab28-c44978cacf22] Running
	I0811 23:44:34.712146  137978 system_pods.go:61] "kindnet-q6qpq" [cf34f80e-7018-4003-b7c5-94c7c8ea41da] Running
	I0811 23:44:34.712160  137978 system_pods.go:61] "kube-apiserver-pause-634825" [14d6efed-a5f8-4440-b368-8b3ffef2412b] Running
	I0811 23:44:34.712167  137978 system_pods.go:61] "kube-controller-manager-pause-634825" [977c2210-4efb-43e0-9a65-ee526f24ca57] Running
	I0811 23:44:34.712173  137978 system_pods.go:61] "kube-proxy-sptbv" [f9473cb2-7b87-40e8-891a-aa651f27406d] Running
	I0811 23:44:34.712178  137978 system_pods.go:61] "kube-scheduler-pause-634825" [6d805427-0c85-416f-93b7-a27be1ae2294] Running
	I0811 23:44:34.712187  137978 system_pods.go:74] duration metric: took 174.348754ms to wait for pod list to return data ...
	I0811 23:44:34.712196  137978 default_sa.go:34] waiting for default service account to be created ...
	I0811 23:44:34.907438  137978 default_sa.go:45] found service account: "default"
	I0811 23:44:34.907463  137978 default_sa.go:55] duration metric: took 195.261651ms for default service account to be created ...
	I0811 23:44:34.907474  137978 system_pods.go:116] waiting for k8s-apps to be running ...
	I0811 23:44:35.114088  137978 system_pods.go:86] 7 kube-system pods found
	I0811 23:44:35.114179  137978 system_pods.go:89] "coredns-5d78c9869d-7zz5s" [e3005957-3506-4ea1-a12a-1961a28c67d4] Running
	I0811 23:44:35.114201  137978 system_pods.go:89] "etcd-pause-634825" [e24d09aa-5f75-4c4a-ab28-c44978cacf22] Running
	I0811 23:44:35.114245  137978 system_pods.go:89] "kindnet-q6qpq" [cf34f80e-7018-4003-b7c5-94c7c8ea41da] Running
	I0811 23:44:35.114270  137978 system_pods.go:89] "kube-apiserver-pause-634825" [14d6efed-a5f8-4440-b368-8b3ffef2412b] Running
	I0811 23:44:35.114291  137978 system_pods.go:89] "kube-controller-manager-pause-634825" [977c2210-4efb-43e0-9a65-ee526f24ca57] Running
	I0811 23:44:35.114328  137978 system_pods.go:89] "kube-proxy-sptbv" [f9473cb2-7b87-40e8-891a-aa651f27406d] Running
	I0811 23:44:35.114357  137978 system_pods.go:89] "kube-scheduler-pause-634825" [6d805427-0c85-416f-93b7-a27be1ae2294] Running
	I0811 23:44:35.114381  137978 system_pods.go:126] duration metric: took 206.901804ms to wait for k8s-apps to be running ...
	I0811 23:44:35.114468  137978 system_svc.go:44] waiting for kubelet service to be running ....
	I0811 23:44:35.114564  137978 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0811 23:44:35.131486  137978 system_svc.go:56] duration metric: took 17.008961ms WaitForService to wait for kubelet.
	I0811 23:44:35.131510  137978 kubeadm.go:581] duration metric: took 3.405728386s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0811 23:44:35.131531  137978 node_conditions.go:102] verifying NodePressure condition ...
	I0811 23:44:35.310376  137978 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0811 23:44:35.310412  137978 node_conditions.go:123] node cpu capacity is 2
	I0811 23:44:35.310423  137978 node_conditions.go:105] duration metric: took 178.887099ms to run NodePressure ...
	I0811 23:44:35.310434  137978 start.go:228] waiting for startup goroutines ...
	I0811 23:44:35.310449  137978 start.go:233] waiting for cluster config update ...
	I0811 23:44:35.310461  137978 start.go:242] writing updated cluster config ...
	I0811 23:44:35.310854  137978 ssh_runner.go:195] Run: rm -f paused
	I0811 23:44:35.393749  137978 start.go:599] kubectl: 1.27.4, cluster: 1.27.4 (minor skew: 0)
	I0811 23:44:35.396926  137978 out.go:177] * Done! kubectl is now configured to use "pause-634825" cluster and "default" namespace by default
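	The pod_ready/api_server checks above reduce to a poll on each pod's Ready condition followed by a /healthz probe. A minimal client-go sketch of that loop follows; this is illustrative, not minikube's actual code, and the kubeconfig path and pod name are taken from this run only as examples.

	// pod_ready_sketch.go - minimal sketch of a "wait for pod Ready" loop.
	// Assumes a reachable cluster via $HOME/.kube/config.
	package main

	import (
		"context"
		"fmt"
		"os"
		"path/filepath"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		home, _ := os.UserHomeDir()
		cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute) // mirrors the 6m0s budget in the log
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-pause-634825", metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						fmt.Println("pod is Ready")
						return
					}
				}
			}
			time.Sleep(400 * time.Millisecond) // the log shows ~400ms between checks
		}
		fmt.Println("timed out waiting for Ready")
	}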
	I0811 23:44:31.355632  141718 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17044-2333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-847326:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 -I lz4 -xf /preloaded.tar -C /extractDir: (4.210044795s)
	I0811 23:44:31.355665  141718 kic.go:199] duration metric: took 4.210202 seconds to extract preloaded images to volume
	W0811 23:44:31.355806  141718 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0811 23:44:31.355922  141718 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0811 23:44:31.461054  141718 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-847326 --name force-systemd-flag-847326 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-847326 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-847326 --network force-systemd-flag-847326 --ip 192.168.67.2 --volume force-systemd-flag-847326:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37
	I0811 23:44:31.879797  141718 cli_runner.go:164] Run: docker container inspect force-systemd-flag-847326 --format={{.State.Running}}
	I0811 23:44:31.908554  141718 cli_runner.go:164] Run: docker container inspect force-systemd-flag-847326 --format={{.State.Status}}
	I0811 23:44:31.944500  141718 cli_runner.go:164] Run: docker exec force-systemd-flag-847326 stat /var/lib/dpkg/alternatives/iptables
	I0811 23:44:32.020865  141718 oci.go:144] the created container "force-systemd-flag-847326" has a running status.
	I0811 23:44:32.020898  141718 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17044-2333/.minikube/machines/force-systemd-flag-847326/id_rsa...
	I0811 23:44:32.764996  141718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/machines/force-systemd-flag-847326/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0811 23:44:32.765044  141718 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17044-2333/.minikube/machines/force-systemd-flag-847326/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0811 23:44:32.792160  141718 cli_runner.go:164] Run: docker container inspect force-systemd-flag-847326 --format={{.State.Status}}
	I0811 23:44:32.818709  141718 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0811 23:44:32.818730  141718 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-847326 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0811 23:44:32.929929  141718 cli_runner.go:164] Run: docker container inspect force-systemd-flag-847326 --format={{.State.Status}}
	I0811 23:44:32.955560  141718 machine.go:88] provisioning docker machine ...
	I0811 23:44:32.955587  141718 ubuntu.go:169] provisioning hostname "force-systemd-flag-847326"
	I0811 23:44:32.955651  141718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-847326
	I0811 23:44:32.988179  141718 main.go:141] libmachine: Using SSH client type: native
	I0811 23:44:32.988651  141718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f940] 0x3a22d0 <nil>  [] 0s} 127.0.0.1 32968 <nil> <nil>}
	I0811 23:44:32.988671  141718 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-847326 && echo "force-systemd-flag-847326" | sudo tee /etc/hostname
	I0811 23:44:33.172085  141718 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-847326
	
	I0811 23:44:33.172173  141718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-847326
	I0811 23:44:33.192156  141718 main.go:141] libmachine: Using SSH client type: native
	I0811 23:44:33.192583  141718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f940] 0x3a22d0 <nil>  [] 0s} 127.0.0.1 32968 <nil> <nil>}
	I0811 23:44:33.192603  141718 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-847326' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-847326/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-847326' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0811 23:44:33.350821  141718 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0811 23:44:33.350843  141718 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17044-2333/.minikube CaCertPath:/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17044-2333/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17044-2333/.minikube}
	I0811 23:44:33.350863  141718 ubuntu.go:177] setting up certificates
	I0811 23:44:33.350871  141718 provision.go:83] configureAuth start
	I0811 23:44:33.350939  141718 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-847326
	I0811 23:44:33.371037  141718 provision.go:138] copyHostCerts
	I0811 23:44:33.371076  141718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17044-2333/.minikube/cert.pem
	I0811 23:44:33.371108  141718 exec_runner.go:144] found /home/jenkins/minikube-integration/17044-2333/.minikube/cert.pem, removing ...
	I0811 23:44:33.371115  141718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17044-2333/.minikube/cert.pem
	I0811 23:44:33.371190  141718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17044-2333/.minikube/cert.pem (1123 bytes)
	I0811 23:44:33.371267  141718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17044-2333/.minikube/key.pem
	I0811 23:44:33.371285  141718 exec_runner.go:144] found /home/jenkins/minikube-integration/17044-2333/.minikube/key.pem, removing ...
	I0811 23:44:33.371289  141718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17044-2333/.minikube/key.pem
	I0811 23:44:33.371314  141718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17044-2333/.minikube/key.pem (1675 bytes)
	I0811 23:44:33.371353  141718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17044-2333/.minikube/ca.pem
	I0811 23:44:33.371367  141718 exec_runner.go:144] found /home/jenkins/minikube-integration/17044-2333/.minikube/ca.pem, removing ...
	I0811 23:44:33.371371  141718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17044-2333/.minikube/ca.pem
	I0811 23:44:33.371394  141718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17044-2333/.minikube/ca.pem (1082 bytes)
	I0811 23:44:33.371441  141718 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17044-2333/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-847326 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube force-systemd-flag-847326]
	I0811 23:44:34.101560  141718 provision.go:172] copyRemoteCerts
	I0811 23:44:34.101630  141718 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0811 23:44:34.101670  141718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-847326
	I0811 23:44:34.122067  141718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/force-systemd-flag-847326/id_rsa Username:docker}
	I0811 23:44:34.231914  141718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0811 23:44:34.231986  141718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0811 23:44:34.262109  141718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0811 23:44:34.262215  141718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0811 23:44:34.292090  141718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0811 23:44:34.292154  141718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0811 23:44:34.322747  141718 provision.go:86] duration metric: configureAuth took 971.859893ms
	I0811 23:44:34.322773  141718 ubuntu.go:193] setting minikube options for container-runtime
	I0811 23:44:34.323007  141718 config.go:182] Loaded profile config "force-systemd-flag-847326": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0811 23:44:34.323127  141718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-847326
	I0811 23:44:34.343035  141718 main.go:141] libmachine: Using SSH client type: native
	I0811 23:44:34.343474  141718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f940] 0x3a22d0 <nil>  [] 0s} 127.0.0.1 32968 <nil> <nil>}
	I0811 23:44:34.343499  141718 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0811 23:44:34.625493  141718 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0811 23:44:34.625514  141718 machine.go:91] provisioned docker machine in 1.669936921s
	I0811 23:44:34.625524  141718 client.go:171] LocalClient.Create took 8.306945348s
	I0811 23:44:34.625536  141718 start.go:167] duration metric: libmachine.API.Create for "force-systemd-flag-847326" took 8.306985274s
	I0811 23:44:34.625544  141718 start.go:300] post-start starting for "force-systemd-flag-847326" (driver="docker")
	I0811 23:44:34.625553  141718 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0811 23:44:34.625637  141718 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0811 23:44:34.625684  141718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-847326
	I0811 23:44:34.651528  141718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/force-systemd-flag-847326/id_rsa Username:docker}
	I0811 23:44:34.758051  141718 ssh_runner.go:195] Run: cat /etc/os-release
	I0811 23:44:34.763171  141718 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0811 23:44:34.763204  141718 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0811 23:44:34.763215  141718 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0811 23:44:34.763222  141718 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0811 23:44:34.763234  141718 filesync.go:126] Scanning /home/jenkins/minikube-integration/17044-2333/.minikube/addons for local assets ...
	I0811 23:44:34.763294  141718 filesync.go:126] Scanning /home/jenkins/minikube-integration/17044-2333/.minikube/files for local assets ...
	I0811 23:44:34.763389  141718 filesync.go:149] local asset: /home/jenkins/minikube-integration/17044-2333/.minikube/files/etc/ssl/certs/76342.pem -> 76342.pem in /etc/ssl/certs
	I0811 23:44:34.763397  141718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/files/etc/ssl/certs/76342.pem -> /etc/ssl/certs/76342.pem
	I0811 23:44:34.763496  141718 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0811 23:44:34.774726  141718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/files/etc/ssl/certs/76342.pem --> /etc/ssl/certs/76342.pem (1708 bytes)
	I0811 23:44:34.806021  141718 start.go:303] post-start completed in 180.463454ms
	I0811 23:44:34.806394  141718 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-847326
	I0811 23:44:34.828916  141718 profile.go:148] Saving config to /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/force-systemd-flag-847326/config.json ...
	I0811 23:44:34.829270  141718 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0811 23:44:34.829322  141718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-847326
	I0811 23:44:34.846501  141718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/force-systemd-flag-847326/id_rsa Username:docker}
	I0811 23:44:34.948000  141718 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0811 23:44:34.953892  141718 start.go:128] duration metric: createHost completed in 8.638296538s
	I0811 23:44:34.953915  141718 start.go:83] releasing machines lock for "force-systemd-flag-847326", held for 8.638490484s
	I0811 23:44:34.953984  141718 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-847326
	I0811 23:44:34.974966  141718 ssh_runner.go:195] Run: cat /version.json
	I0811 23:44:34.975029  141718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-847326
	I0811 23:44:34.975277  141718 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0811 23:44:34.975329  141718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-847326
	I0811 23:44:35.003638  141718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/force-systemd-flag-847326/id_rsa Username:docker}
	I0811 23:44:35.015433  141718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/force-systemd-flag-847326/id_rsa Username:docker}
	I0811 23:44:35.106424  141718 ssh_runner.go:195] Run: systemctl --version
	I0811 23:44:35.255782  141718 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0811 23:44:35.414078  141718 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0811 23:44:35.433313  141718 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0811 23:44:35.510700  141718 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0811 23:44:35.510807  141718 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0811 23:44:35.611290  141718 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0811 23:44:35.611314  141718 start.go:466] detecting cgroup driver to use...
	I0811 23:44:35.611327  141718 start.go:470] using "systemd" cgroup driver as enforced via flags
	I0811 23:44:35.611380  141718 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0811 23:44:35.655732  141718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0811 23:44:35.697501  141718 docker.go:196] disabling cri-docker service (if available) ...
	I0811 23:44:35.697569  141718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0811 23:44:35.724784  141718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0811 23:44:35.761653  141718 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0811 23:44:35.941053  141718 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
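	The systemctl sequence just above stops, disables, and masks cri-docker so that CRI-O alone owns the CRI socket. A rough local replay of that sequence with os/exec, assuming a throwaway host with sudo (failures are expected and tolerated when the unit does not exist):

	// cri_docker_disable_sketch.go - illustrative only, not minikube's code.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(args ...string) {
		out, err := exec.Command("sudo", args...).CombinedOutput()
		fmt.Printf("$ sudo %v -> err=%v\n%s", args, err, out)
	}

	func main() {
		run("systemctl", "stop", "-f", "cri-docker.socket")
		run("systemctl", "stop", "-f", "cri-docker.service")
		run("systemctl", "disable", "cri-docker.socket")
		run("systemctl", "mask", "cri-docker.service")
	}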
	
	* 
	* ==> CRI-O <==
	* Aug 11 23:44:23 pause-634825 crio[2601]: time="2023-08-11 23:44:23.920556101Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/cc9c3f65a74c58cd85cfbc049cc879f0e291da2794b867949238551ff73ac6a0/merged/etc/group: no such file or directory"
	Aug 11 23:44:24 pause-634825 crio[2601]: time="2023-08-11 23:44:24.040507146Z" level=info msg="Created container b33696f9f00d4a0f1ca440313049b11491bcca9e341bbc3065dbd76850ec5732: kube-system/coredns-5d78c9869d-7zz5s/coredns" id=88201f77-0fdc-4085-867b-d0e7039028c0 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 11 23:44:24 pause-634825 crio[2601]: time="2023-08-11 23:44:24.041248205Z" level=info msg="Starting container: b33696f9f00d4a0f1ca440313049b11491bcca9e341bbc3065dbd76850ec5732" id=2b3e26c4-96b4-4807-9110-c9e138db5ee2 name=/runtime.v1.RuntimeService/StartContainer
	Aug 11 23:44:24 pause-634825 crio[2601]: time="2023-08-11 23:44:24.091959923Z" level=info msg="Created container fd096ea5b2598f6d5669d906e60351567b0f7e31d776bb25d0d915ee7d8ff33b: kube-system/kindnet-q6qpq/kindnet-cni" id=30574617-4ab4-46cb-8ef3-1ddd8dc48bd2 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 11 23:44:24 pause-634825 crio[2601]: time="2023-08-11 23:44:24.093664518Z" level=info msg="Starting container: fd096ea5b2598f6d5669d906e60351567b0f7e31d776bb25d0d915ee7d8ff33b" id=b42d14a3-c8c9-4555-9973-a2a55dec4600 name=/runtime.v1.RuntimeService/StartContainer
	Aug 11 23:44:24 pause-634825 crio[2601]: time="2023-08-11 23:44:24.104806566Z" level=info msg="Started container" PID=3477 containerID=b33696f9f00d4a0f1ca440313049b11491bcca9e341bbc3065dbd76850ec5732 description=kube-system/coredns-5d78c9869d-7zz5s/coredns id=2b3e26c4-96b4-4807-9110-c9e138db5ee2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a48e172e7e9f0377f2066cba42f691ab5a47b2a04a9c0ae2fab01ccd31ddbf5b
	Aug 11 23:44:24 pause-634825 crio[2601]: time="2023-08-11 23:44:24.122193489Z" level=info msg="Started container" PID=3471 containerID=fd096ea5b2598f6d5669d906e60351567b0f7e31d776bb25d0d915ee7d8ff33b description=kube-system/kindnet-q6qpq/kindnet-cni id=b42d14a3-c8c9-4555-9973-a2a55dec4600 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7e7075607dcbe29b71e83494f2d6e7bf7aeb9ae7400c5d010aa551dac55bca68
	Aug 11 23:44:24 pause-634825 crio[2601]: time="2023-08-11 23:44:24.276815661Z" level=info msg="Created container c7c4e7c90ec34f302c92216e3692ff9844e7c07ed749ac9c6f7b8b52e12e1284: kube-system/kube-proxy-sptbv/kube-proxy" id=9324ac04-7570-45b0-983d-da200f0dfb03 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 11 23:44:24 pause-634825 crio[2601]: time="2023-08-11 23:44:24.277580909Z" level=info msg="Starting container: c7c4e7c90ec34f302c92216e3692ff9844e7c07ed749ac9c6f7b8b52e12e1284" id=f9d8c716-53b7-465e-99bd-f0ce55f2bd47 name=/runtime.v1.RuntimeService/StartContainer
	Aug 11 23:44:24 pause-634825 crio[2601]: time="2023-08-11 23:44:24.297686554Z" level=info msg="Started container" PID=3462 containerID=c7c4e7c90ec34f302c92216e3692ff9844e7c07ed749ac9c6f7b8b52e12e1284 description=kube-system/kube-proxy-sptbv/kube-proxy id=f9d8c716-53b7-465e-99bd-f0ce55f2bd47 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ef63d5ca5c5deeff0492472fbca6f69ccde92debf681996838d5fda80989c9d7
	Aug 11 23:44:24 pause-634825 crio[2601]: time="2023-08-11 23:44:24.553592117Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Aug 11 23:44:24 pause-634825 crio[2601]: time="2023-08-11 23:44:24.585432127Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 11 23:44:24 pause-634825 crio[2601]: time="2023-08-11 23:44:24.585469542Z" level=info msg="Updated default CNI network name to kindnet"
	Aug 11 23:44:24 pause-634825 crio[2601]: time="2023-08-11 23:44:24.633420200Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Aug 11 23:44:24 pause-634825 crio[2601]: time="2023-08-11 23:44:24.659088541Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 11 23:44:24 pause-634825 crio[2601]: time="2023-08-11 23:44:24.659123438Z" level=info msg="Updated default CNI network name to kindnet"
	Aug 11 23:44:24 pause-634825 crio[2601]: time="2023-08-11 23:44:24.659142179Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Aug 11 23:44:24 pause-634825 crio[2601]: time="2023-08-11 23:44:24.677721537Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 11 23:44:24 pause-634825 crio[2601]: time="2023-08-11 23:44:24.677753176Z" level=info msg="Updated default CNI network name to kindnet"
	Aug 11 23:44:24 pause-634825 crio[2601]: time="2023-08-11 23:44:24.677769242Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Aug 11 23:44:24 pause-634825 crio[2601]: time="2023-08-11 23:44:24.698199246Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 11 23:44:24 pause-634825 crio[2601]: time="2023-08-11 23:44:24.698240797Z" level=info msg="Updated default CNI network name to kindnet"
	Aug 11 23:44:24 pause-634825 crio[2601]: time="2023-08-11 23:44:24.698260473Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Aug 11 23:44:24 pause-634825 crio[2601]: time="2023-08-11 23:44:24.710444072Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 11 23:44:24 pause-634825 crio[2601]: time="2023-08-11 23:44:24.710500515Z" level=info msg="Updated default CNI network name to kindnet"
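	The CREATE/WRITE/RENAME triples above come from CRI-O watching /etc/cni/net.d while kindnet writes 10-kindnet.conflist.temp and then renames it into place. A rough analogue of that directory watch (not CRI-O's implementation) using github.com/fsnotify/fsnotify:

	// cni_watch_sketch.go - report conflist changes the way the log shows them.
	package main

	import (
		"log"

		"github.com/fsnotify/fsnotify"
	)

	func main() {
		w, err := fsnotify.NewWatcher()
		if err != nil {
			log.Fatal(err)
		}
		defer w.Close()
		if err := w.Add("/etc/cni/net.d"); err != nil {
			log.Fatal(err)
		}
		for {
			select {
			case ev := <-w.Events:
				// The .temp write followed by a rename explains the event triples above.
				log.Printf("CNI monitoring event %q: %s", ev.Name, ev.Op)
			case err := <-w.Errors:
				log.Println("watch error:", err)
			}
		}
	}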
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b33696f9f00d4       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   13 seconds ago      Running             coredns                   2                   a48e172e7e9f0       coredns-5d78c9869d-7zz5s
	fd096ea5b2598       b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79   13 seconds ago      Running             kindnet-cni               3                   7e7075607dcbe       kindnet-q6qpq
	c7c4e7c90ec34       532e5a30e948f1c084333316b13e68fbeff8df667f3830b082005127a6d86317   13 seconds ago      Running             kube-proxy                3                   ef63d5ca5c5de       kube-proxy-sptbv
	a8a7f2ce1a045       64aece92d6bde5b472d8185fcd2d5ab1add8814923a26561821f7cab5e819388   22 seconds ago      Running             kube-apiserver            2                   1939f6bc8aca7       kube-apiserver-pause-634825
	97fa57d0ed11c       24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737   22 seconds ago      Running             etcd                      2                   67766735af340       etcd-pause-634825
	bcc7ddc18d12e       389f6f052cf83156f82a2bbbf6ea2c24292d246b58900d91f6a1707eacf510b2   22 seconds ago      Running             kube-controller-manager   2                   2027f21f45cb4       kube-controller-manager-pause-634825
	7447e17b7ff99       6eb63895cb67fce76da3ed6eaaa865ff55e7c761c9e6a691a83855ff0987a085   22 seconds ago      Running             kube-scheduler            2                   b8ff3a6415a63       kube-scheduler-pause-634825
	4425001a8d2b4       532e5a30e948f1c084333316b13e68fbeff8df667f3830b082005127a6d86317   26 seconds ago      Exited              kube-proxy                2                   ef63d5ca5c5de       kube-proxy-sptbv
	792bd634ed394       b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79   26 seconds ago      Exited              kindnet-cni               2                   7e7075607dcbe       kindnet-q6qpq
	d01488efe1422       389f6f052cf83156f82a2bbbf6ea2c24292d246b58900d91f6a1707eacf510b2   38 seconds ago      Exited              kube-controller-manager   1                   2027f21f45cb4       kube-controller-manager-pause-634825
	d529e1a8dc40a       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   38 seconds ago      Exited              coredns                   1                   a48e172e7e9f0       coredns-5d78c9869d-7zz5s
	24d42e36b289e       64aece92d6bde5b472d8185fcd2d5ab1add8814923a26561821f7cab5e819388   38 seconds ago      Exited              kube-apiserver            1                   1939f6bc8aca7       kube-apiserver-pause-634825
	6024f9aef0a28       24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737   38 seconds ago      Exited              etcd                      1                   67766735af340       etcd-pause-634825
	9ecaf1f2abdda       6eb63895cb67fce76da3ed6eaaa865ff55e7c761c9e6a691a83855ff0987a085   38 seconds ago      Exited              kube-scheduler            1                   b8ff3a6415a63       kube-scheduler-pause-634825
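	A table like the one above appears to be crictl output, the CRI CLI that ships alongside CRI-O; it can be regenerated on the node itself. A trivial wrapper sketch (must run on the minikube node, requires sudo):

	// crictl_ps_sketch.go - shell out to crictl to list all containers.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
		if err != nil {
			fmt.Println("crictl failed:", err)
		}
		fmt.Print(string(out))
	}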
	
	* 
	* ==> coredns [b33696f9f00d4a0f1ca440313049b11491bcca9e341bbc3065dbd76850ec5732] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:43932 - 41693 "HINFO IN 445594622542569235.3547650646003049932. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.015790826s
	
	* 
	* ==> coredns [d529e1a8dc40a6ab1f2b51e1d86c5819a7cae02e60dcc86a018f72ddb94523f1] <==
	* 
	* 
	* ==> describe nodes <==
	* Name:               pause-634825
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-634825
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0bff008270ec17d4e0c2c90a14e18ac31a0e01f5
	                    minikube.k8s.io/name=pause-634825
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_11T23_43_03_0700
	                    minikube.k8s.io/version=v1.31.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 11 Aug 2023 23:42:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-634825
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 11 Aug 2023 23:44:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 11 Aug 2023 23:44:23 +0000   Fri, 11 Aug 2023 23:42:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 11 Aug 2023 23:44:23 +0000   Fri, 11 Aug 2023 23:42:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 11 Aug 2023 23:44:23 +0000   Fri, 11 Aug 2023 23:42:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 11 Aug 2023 23:44:23 +0000   Fri, 11 Aug 2023 23:43:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-634825
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022628Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022628Ki
	  pods:               110
	System Info:
	  Machine ID:                 106dbb0f244c44e2b6a5835bc1103596
	  System UUID:                c11e9341-4ad9-484b-a824-aefd9ce8ab2f
	  Boot ID:                    9640b2fc-8f02-48dc-9a98-7457f33cfb40
	  Kernel Version:             5.15.0-1040-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.27.4
	  Kube-Proxy Version:         v1.27.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-7zz5s                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     82s
	  kube-system                 etcd-pause-634825                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         95s
	  kube-system                 kindnet-q6qpq                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      82s
	  kube-system                 kube-apiserver-pause-634825             250m (12%)    0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-controller-manager-pause-634825    200m (10%)    0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-proxy-sptbv                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-scheduler-pause-634825             100m (5%)     0 (0%)      0 (0%)           0 (0%)         97s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 81s                  kube-proxy       
	  Normal  Starting                 12s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  105s (x8 over 105s)  kubelet          Node pause-634825 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    105s (x8 over 105s)  kubelet          Node pause-634825 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     105s (x8 over 105s)  kubelet          Node pause-634825 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     95s                  kubelet          Node pause-634825 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  95s                  kubelet          Node pause-634825 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    95s                  kubelet          Node pause-634825 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 95s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           83s                  node-controller  Node pause-634825 event: Registered Node pause-634825 in Controller
	  Normal  NodeReady                52s                  kubelet          Node pause-634825 status is now: NodeReady
	  Normal  Starting                 24s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 24s)    kubelet          Node pause-634825 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 24s)    kubelet          Node pause-634825 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x8 over 24s)    kubelet          Node pause-634825 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2s                   node-controller  Node pause-634825 event: Registered Node pause-634825 in Controller
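	The node_conditions.go check logged earlier (NodePressure verified, cpu capacity 2, ephemeral storage 203034800Ki) reads exactly the fields shown in this describe output. A minimal client-go sketch, with the kubeconfig path and node name as illustrative values:

	// node_conditions_sketch.go - read node conditions and capacity.
	package main

	import (
		"context"
		"fmt"
		"os"
		"path/filepath"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		home, _ := os.UserHomeDir()
		cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "pause-634825", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, c := range node.Status.Conditions {
			// MemoryPressure/DiskPressure/PIDPressure should be False, Ready True.
			fmt.Printf("%-16s %s\n", c.Type, c.Status)
		}
		cpu := node.Status.Capacity[corev1.ResourceCPU]
		storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Println("cpu:", cpu.String(), "ephemeral-storage:", storage.String())
	}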
	
	* 
	* ==> dmesg <==
	* [  +0.000754] FS-Cache: N-cookie c=0000000c [p=00000003 fl=2 nc=0 na=1]
	[  +0.000944] FS-Cache: N-cookie d=0000000087cf7eaf{9p.inode} n=00000000d7da585e
	[  +0.001054] FS-Cache: N-key=[8] '805b3b0000000000'
	[  +0.003010] FS-Cache: Duplicate cookie detected
	[  +0.000685] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000959] FS-Cache: O-cookie d=0000000087cf7eaf{9p.inode} n=0000000004a8382c
	[  +0.001063] FS-Cache: O-key=[8] '805b3b0000000000'
	[  +0.000759] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.000963] FS-Cache: N-cookie d=0000000087cf7eaf{9p.inode} n=0000000045141c8c
	[  +0.001051] FS-Cache: N-key=[8] '805b3b0000000000'
	[  +2.763262] FS-Cache: Duplicate cookie detected
	[  +0.000715] FS-Cache: O-cookie c=00000004 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000977] FS-Cache: O-cookie d=0000000087cf7eaf{9p.inode} n=000000003c07f4d4
	[  +0.001127] FS-Cache: O-key=[8] '7f5b3b0000000000'
	[  +0.000727] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.000956] FS-Cache: N-cookie d=0000000087cf7eaf{9p.inode} n=000000006a6921aa
	[  +0.001095] FS-Cache: N-key=[8] '7f5b3b0000000000'
	[  +0.384460] FS-Cache: Duplicate cookie detected
	[  +0.000735] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000980] FS-Cache: O-cookie d=0000000087cf7eaf{9p.inode} n=0000000084bb64d5
	[  +0.001049] FS-Cache: O-key=[8] '8a5b3b0000000000'
	[  +0.000719] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.000976] FS-Cache: N-cookie d=0000000087cf7eaf{9p.inode} n=00000000d7da585e
	[  +0.001049] FS-Cache: N-key=[8] '8a5b3b0000000000'
	[Aug11 23:13] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	* 
	* ==> etcd [6024f9aef0a281043827e7556097bdddf70695ffbdd8e11a3a5f3ca6baca26f9] <==
	* {"level":"warn","ts":"2023-08-11T23:43:58.997Z","caller":"flags/flag.go:93","msg":"unrecognized environment variable","environment-variable":"ETCD_UNSUPPORTED_ARCH=arm64"}
	{"level":"info","ts":"2023-08-11T23:43:59.011Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.76.2:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.76.2:2380","--initial-cluster=pause-634825=https://192.168.76.2:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.76.2:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.76.2:2380","--name=pause-634825","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/
var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2023-08-11T23:43:59.011Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"info","ts":"2023-08-11T23:43:59.011Z","caller":"embed/etcd.go:124","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2023-08-11T23:43:59.011Z","caller":"embed/etcd.go:484","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-08-11T23:43:59.011Z","caller":"embed/etcd.go:132","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"]}
	{"level":"info","ts":"2023-08-11T23:43:59.012Z","caller":"embed/etcd.go:306","msg":"starting an etcd server","etcd-version":"3.5.7","git-sha":"215b53cf3","go-version":"go1.17.13","go-os":"linux","go-arch":"arm64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"pause-634825","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token"
:"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2023-08-11T23:43:59.017Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"5.079287ms"}
	{"level":"info","ts":"2023-08-11T23:43:59.082Z","caller":"etcdserver/server.go:530","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2023-08-11T23:43:59.090Z","caller":"etcdserver/raft.go:529","msg":"restarting local member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","commit-index":448}
	{"level":"info","ts":"2023-08-11T23:43:59.091Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=()"}
	{"level":"info","ts":"2023-08-11T23:43:59.091Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became follower at term 2"}
	{"level":"info","ts":"2023-08-11T23:43:59.091Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft ea7e25599daad906 [peers: [], term: 2, commit: 448, applied: 0, lastindex: 448, lastterm: 2]"}
	{"level":"warn","ts":"2023-08-11T23:43:59.108Z","caller":"auth/store.go:1234","msg":"simple token is not cryptographically signed"}
	
	* 
	* ==> etcd [97fa57d0ed11c36b87bbcbb8fd6c82944f930e66220cdbdcafe3af34dfe79fb0] <==
	* {"level":"info","ts":"2023-08-11T23:44:15.222Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2023-08-11T23:44:15.222Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-11T23:44:15.222Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-11T23:44:15.225Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-08-11T23:44:15.225Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-08-11T23:44:15.225Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-08-11T23:44:15.226Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-08-11T23:44:15.226Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-08-11T23:44:15.226Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-08-11T23:44:15.227Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-08-11T23:44:15.227Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-08-11T23:44:16.402Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2023-08-11T23:44:16.402Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-08-11T23:44:16.402Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2023-08-11T23:44:16.402Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2023-08-11T23:44:16.402Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-08-11T23:44:16.402Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2023-08-11T23:44:16.402Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-08-11T23:44:16.406Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-634825 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-11T23:44:16.407Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-11T23:44:16.411Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2023-08-11T23:44:16.417Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-11T23:44:16.418Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-11T23:44:16.421Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-11T23:44:16.421Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  23:44:37 up  1:27,  0 users,  load average: 4.06, 3.03, 2.16
	Linux pause-634825 5.15.0-1040-aws #45~20.04.1-Ubuntu SMP Tue Jul 11 19:11:12 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [792bd634ed3940953f2c22c64a76c28dabb0b8c94e837fef81ac4a01d8942c96] <==
	* I0811 23:44:10.532723       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0811 23:44:10.532789       1 main.go:107] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0811 23:44:10.532908       1 main.go:116] setting mtu 1500 for CNI 
	I0811 23:44:10.532922       1 main.go:146] kindnetd IP family: "ipv4"
	I0811 23:44:10.532932       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0811 23:44:10.741771       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0811 23:44:10.829384       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
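	The "connection refused" retries above are the normal client-side pattern while the apiserver restarts. A bounded-poll sketch of the same idea using apimachinery's wait helpers; the service VIP is taken from the log, and the insecure TLS config is for illustration only:

	// apiserver_retry_sketch.go - poll until the apiserver socket accepts connections.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"

		"k8s.io/apimachinery/pkg/util/wait"
	)

	func main() {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		err := wait.PollImmediate(500*time.Millisecond, 30*time.Second, func() (bool, error) {
			resp, err := client.Get("https://10.96.0.1:443/api/v1/nodes")
			if err != nil {
				fmt.Println("retrying after error:", err) // matches the kindnet log line
				return false, nil                         // transient: keep polling
			}
			resp.Body.Close()
			return true, nil // reachable (even a 401/403 proves the socket is up)
		})
		fmt.Println("poll finished:", err)
	}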
	
	* 
	* ==> kindnet [fd096ea5b2598f6d5669d906e60351567b0f7e31d776bb25d0d915ee7d8ff33b] <==
	* I0811 23:44:24.225911       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0811 23:44:24.228237       1 main.go:107] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0811 23:44:24.228475       1 main.go:116] setting mtu 1500 for CNI 
	I0811 23:44:24.229279       1 main.go:146] kindnetd IP family: "ipv4"
	I0811 23:44:24.229333       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0811 23:44:24.553226       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0811 23:44:24.553373       1 main.go:227] handling current node
	I0811 23:44:34.659587       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0811 23:44:34.659972       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [24d42e36b289e4acc22437a5331118aff3e510567fbaa83f6407823d129ad5d7] <==
	* 
	* 
	* ==> kube-apiserver [a8a7f2ce1a04500c1ff5b075c85a3f290ec2a118dbea68eacacbea277af8450d] <==
	* I0811 23:44:22.947829       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0811 23:44:22.947902       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0811 23:44:22.954361       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0811 23:44:22.954381       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	E0811 23:44:23.271701       1 controller.go:155] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0811 23:44:23.297827       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0811 23:44:23.337794       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0811 23:44:23.343825       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0811 23:44:23.344660       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0811 23:44:23.346669       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0811 23:44:23.346782       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0811 23:44:23.347545       1 shared_informer.go:318] Caches are synced for configmaps
	I0811 23:44:23.355287       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0811 23:44:23.355355       1 aggregator.go:152] initial CRD sync complete...
	I0811 23:44:23.355400       1 autoregister_controller.go:141] Starting autoregister controller
	I0811 23:44:23.355413       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0811 23:44:23.355422       1 cache.go:39] Caches are synced for autoregister controller
	I0811 23:44:23.355480       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0811 23:44:23.422329       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0811 23:44:24.055180       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0811 23:44:26.085236       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0811 23:44:26.276606       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0811 23:44:26.294868       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0811 23:44:26.366394       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0811 23:44:26.375063       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-controller-manager [bcc7ddc18d12e3006b96f9fe103e6c1da53af66f951b985ba59f60349a7b8163] <==
	* I0811 23:44:35.730264       1 shared_informer.go:318] Caches are synced for resource quota
	I0811 23:44:35.730279       1 shared_informer.go:318] Caches are synced for deployment
	I0811 23:44:35.742697       1 shared_informer.go:318] Caches are synced for GC
	I0811 23:44:35.744971       1 shared_informer.go:318] Caches are synced for resource quota
	I0811 23:44:35.754289       1 shared_informer.go:318] Caches are synced for stateful set
	I0811 23:44:35.754431       1 shared_informer.go:318] Caches are synced for daemon sets
	I0811 23:44:35.763476       1 shared_informer.go:318] Caches are synced for ephemeral
	I0811 23:44:35.773215       1 shared_informer.go:318] Caches are synced for PVC protection
	I0811 23:44:35.774690       1 shared_informer.go:318] Caches are synced for taint
	I0811 23:44:35.774774       1 node_lifecycle_controller.go:1223] "Initializing eviction metric for zone" zone=""
	I0811 23:44:35.774851       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-634825"
	I0811 23:44:35.774890       1 node_lifecycle_controller.go:1069] "Controller detected that zone is now in new state" zone="" newState=Normal
	I0811 23:44:35.774903       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0811 23:44:35.774917       1 taint_manager.go:211] "Sending events to api server"
	I0811 23:44:35.775457       1 event.go:307] "Event occurred" object="pause-634825" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-634825 event: Registered Node pause-634825 in Controller"
	I0811 23:44:35.775488       1 shared_informer.go:318] Caches are synced for endpoint
	I0811 23:44:35.775519       1 shared_informer.go:318] Caches are synced for HPA
	I0811 23:44:35.775546       1 shared_informer.go:318] Caches are synced for persistent volume
	I0811 23:44:35.775647       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0811 23:44:35.775901       1 shared_informer.go:318] Caches are synced for attach detach
	I0811 23:44:35.782214       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0811 23:44:35.784572       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0811 23:44:36.099928       1 shared_informer.go:318] Caches are synced for garbage collector
	I0811 23:44:36.100060       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0811 23:44:36.100481       1 shared_informer.go:318] Caches are synced for garbage collector
	
	* 
	* ==> kube-controller-manager [d01488efe142281153b08384869059e10873d2fd7f4f17d722b27b398964923e] <==
	* 
	* 
	* ==> kube-proxy [4425001a8d2b454a91b170a878058197e5f7abe38fa3cdd8e7619a58114a69dd] <==
	* E0811 23:44:10.522789       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-634825": dial tcp 192.168.76.2:8443: connect: connection refused
	
	* 
	* ==> kube-proxy [c7c4e7c90ec34f302c92216e3692ff9844e7c07ed749ac9c6f7b8b52e12e1284] <==
	* I0811 23:44:24.547550       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I0811 23:44:24.547678       1 server_others.go:110] "Detected node IP" address="192.168.76.2"
	I0811 23:44:24.547701       1 server_others.go:554] "Using iptables proxy"
	I0811 23:44:24.739991       1 server_others.go:192] "Using iptables Proxier"
	I0811 23:44:24.740028       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0811 23:44:24.740037       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0811 23:44:24.740052       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0811 23:44:24.740142       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0811 23:44:24.740708       1 server.go:658] "Version info" version="v1.27.4"
	I0811 23:44:24.740797       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0811 23:44:24.742192       1 config.go:188] "Starting service config controller"
	I0811 23:44:24.743804       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0811 23:44:24.742368       1 config.go:97] "Starting endpoint slice config controller"
	I0811 23:44:24.743929       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0811 23:44:24.743099       1 config.go:315] "Starting node config controller"
	I0811 23:44:24.744004       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0811 23:44:24.845320       1 shared_informer.go:318] Caches are synced for node config
	I0811 23:44:24.845322       1 shared_informer.go:318] Caches are synced for service config
	I0811 23:44:24.845340       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [7447e17b7ff999ce6aa012abad756668d5a050f494e6f1aae1596c4c7f9b7a11] <==
	* I0811 23:44:18.698235       1 serving.go:348] Generated self-signed cert in-memory
	I0811 23:44:23.416698       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.4"
	I0811 23:44:23.416804       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0811 23:44:23.422723       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0811 23:44:23.422755       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0811 23:44:23.422975       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0811 23:44:23.422999       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0811 23:44:23.423026       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0811 23:44:23.423032       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0811 23:44:23.423997       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0811 23:44:23.424123       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0811 23:44:23.527379       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0811 23:44:23.527518       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I0811 23:44:23.527629       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [9ecaf1f2abdda582ed04056f0de0846a97e5803f559b05701a13e57372cf544a] <==
	* 
	* 
	* ==> kubelet <==
	* Aug 11 23:44:15 pause-634825 kubelet[3206]: E0811 23:44:15.057353    3206 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="pause-634825"
	Aug 11 23:44:16 pause-634825 kubelet[3206]: I0811 23:44:16.659632    3206 kubelet_node_status.go:70] "Attempting to register node" node="pause-634825"
	Aug 11 23:44:23 pause-634825 kubelet[3206]: I0811 23:44:23.386293    3206 kubelet_node_status.go:108] "Node was previously registered" node="pause-634825"
	Aug 11 23:44:23 pause-634825 kubelet[3206]: I0811 23:44:23.386403    3206 kubelet_node_status.go:73] "Successfully registered node" node="pause-634825"
	Aug 11 23:44:23 pause-634825 kubelet[3206]: I0811 23:44:23.389505    3206 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 11 23:44:23 pause-634825 kubelet[3206]: I0811 23:44:23.390942    3206 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 11 23:44:23 pause-634825 kubelet[3206]: I0811 23:44:23.520970    3206 apiserver.go:52] "Watching apiserver"
	Aug 11 23:44:23 pause-634825 kubelet[3206]: I0811 23:44:23.524373    3206 topology_manager.go:212] "Topology Admit Handler"
	Aug 11 23:44:23 pause-634825 kubelet[3206]: I0811 23:44:23.524475    3206 topology_manager.go:212] "Topology Admit Handler"
	Aug 11 23:44:23 pause-634825 kubelet[3206]: I0811 23:44:23.524520    3206 topology_manager.go:212] "Topology Admit Handler"
	Aug 11 23:44:23 pause-634825 kubelet[3206]: I0811 23:44:23.538811    3206 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
	Aug 11 23:44:23 pause-634825 kubelet[3206]: I0811 23:44:23.560707    3206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqb66\" (UniqueName: \"kubernetes.io/projected/cf34f80e-7018-4003-b7c5-94c7c8ea41da-kube-api-access-pqb66\") pod \"kindnet-q6qpq\" (UID: \"cf34f80e-7018-4003-b7c5-94c7c8ea41da\") " pod="kube-system/kindnet-q6qpq"
	Aug 11 23:44:23 pause-634825 kubelet[3206]: I0811 23:44:23.560830    3206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5khfj\" (UniqueName: \"kubernetes.io/projected/f9473cb2-7b87-40e8-891a-aa651f27406d-kube-api-access-5khfj\") pod \"kube-proxy-sptbv\" (UID: \"f9473cb2-7b87-40e8-891a-aa651f27406d\") " pod="kube-system/kube-proxy-sptbv"
	Aug 11 23:44:23 pause-634825 kubelet[3206]: I0811 23:44:23.561077    3206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cf34f80e-7018-4003-b7c5-94c7c8ea41da-xtables-lock\") pod \"kindnet-q6qpq\" (UID: \"cf34f80e-7018-4003-b7c5-94c7c8ea41da\") " pod="kube-system/kindnet-q6qpq"
	Aug 11 23:44:23 pause-634825 kubelet[3206]: I0811 23:44:23.561333    3206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f9473cb2-7b87-40e8-891a-aa651f27406d-kube-proxy\") pod \"kube-proxy-sptbv\" (UID: \"f9473cb2-7b87-40e8-891a-aa651f27406d\") " pod="kube-system/kube-proxy-sptbv"
	Aug 11 23:44:23 pause-634825 kubelet[3206]: I0811 23:44:23.561466    3206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f9473cb2-7b87-40e8-891a-aa651f27406d-lib-modules\") pod \"kube-proxy-sptbv\" (UID: \"f9473cb2-7b87-40e8-891a-aa651f27406d\") " pod="kube-system/kube-proxy-sptbv"
	Aug 11 23:44:23 pause-634825 kubelet[3206]: I0811 23:44:23.561512    3206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89glp\" (UniqueName: \"kubernetes.io/projected/e3005957-3506-4ea1-a12a-1961a28c67d4-kube-api-access-89glp\") pod \"coredns-5d78c9869d-7zz5s\" (UID: \"e3005957-3506-4ea1-a12a-1961a28c67d4\") " pod="kube-system/coredns-5d78c9869d-7zz5s"
	Aug 11 23:44:23 pause-634825 kubelet[3206]: I0811 23:44:23.561540    3206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/cf34f80e-7018-4003-b7c5-94c7c8ea41da-cni-cfg\") pod \"kindnet-q6qpq\" (UID: \"cf34f80e-7018-4003-b7c5-94c7c8ea41da\") " pod="kube-system/kindnet-q6qpq"
	Aug 11 23:44:23 pause-634825 kubelet[3206]: I0811 23:44:23.561564    3206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf34f80e-7018-4003-b7c5-94c7c8ea41da-lib-modules\") pod \"kindnet-q6qpq\" (UID: \"cf34f80e-7018-4003-b7c5-94c7c8ea41da\") " pod="kube-system/kindnet-q6qpq"
	Aug 11 23:44:23 pause-634825 kubelet[3206]: I0811 23:44:23.561607    3206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f9473cb2-7b87-40e8-891a-aa651f27406d-xtables-lock\") pod \"kube-proxy-sptbv\" (UID: \"f9473cb2-7b87-40e8-891a-aa651f27406d\") " pod="kube-system/kube-proxy-sptbv"
	Aug 11 23:44:23 pause-634825 kubelet[3206]: I0811 23:44:23.561659    3206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3005957-3506-4ea1-a12a-1961a28c67d4-config-volume\") pod \"coredns-5d78c9869d-7zz5s\" (UID: \"e3005957-3506-4ea1-a12a-1961a28c67d4\") " pod="kube-system/coredns-5d78c9869d-7zz5s"
	Aug 11 23:44:23 pause-634825 kubelet[3206]: I0811 23:44:23.561735    3206 reconciler.go:41] "Reconciler: start to sync state"
	Aug 11 23:44:23 pause-634825 kubelet[3206]: I0811 23:44:23.826011    3206 scope.go:115] "RemoveContainer" containerID="d529e1a8dc40a6ab1f2b51e1d86c5819a7cae02e60dcc86a018f72ddb94523f1"
	Aug 11 23:44:23 pause-634825 kubelet[3206]: I0811 23:44:23.827185    3206 scope.go:115] "RemoveContainer" containerID="4425001a8d2b454a91b170a878058197e5f7abe38fa3cdd8e7619a58114a69dd"
	Aug 11 23:44:23 pause-634825 kubelet[3206]: I0811 23:44:23.828253    3206 scope.go:115] "RemoveContainer" containerID="792bd634ed3940953f2c22c64a76c28dabb0b8c94e837fef81ac4a01d8942c96"
	

                                                
                                                
-- /stdout --
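Note: most component headings in the dump above list two container IDs, and one of each pair's sections is empty. That is expected for this test: CRI-O kept the exited pre-restart instance of each control-plane component alongside the running replacement that started logging after 23:44:23. A minimal Go sketch for listing both by hand, assuming the binary path and profile name used throughout this report:

	// crictl_pairs.go: ad-hoc helper, not part of the test suite.
	// Lists every CRI-O container on the node (running and exited) so the
	// duplicated IDs in the logs above can be matched to their state.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "pause-634825",
			"ssh", "sudo crictl ps -a").CombinedOutput()
		if err != nil {
			log.Fatalf("crictl listing failed: %v\n%s", err, out)
		}
		// The STATE column separates the Exited pre-restart containers
		// from their Running replacements.
		fmt.Printf("%s", out)
	}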
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-634825 -n pause-634825
helpers_test.go:261: (dbg) Run:  kubectl --context pause-634825 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
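The kube-proxy sections in the dump above bracket the apiserver restart: the first container died at 23:44:10 with "connect: connection refused" against 192.168.76.2:8443, and its replacement resolved the node IP at 23:44:24. A minimal sketch of the same reachability check, assuming the node IP from this run; /readyz is normally served to anonymous clients via the system:public-info-viewer role, though hardened clusters may answer 401/403:

	// readyz_poll.go: ad-hoc probe, not part of the test suite.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// minikube's apiserver cert is signed by its own CA; skip
			// verification for this throwaway check only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for attempt := 1; attempt <= 30; attempt++ {
			resp, err := client.Get("https://192.168.76.2:8443/readyz")
			if err != nil {
				// Prints "connection refused" while the apiserver is down,
				// matching the first kube-proxy container's error above.
				fmt.Printf("attempt %d: %v\n", attempt, err)
			} else {
				fmt.Printf("attempt %d: HTTP %d\n", attempt, resp.StatusCode)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return
				}
			}
			time.Sleep(2 * time.Second)
		}
	}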
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-634825
helpers_test.go:235: (dbg) docker inspect pause-634825:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7570f01b50c089bb59b90681fa5523c98b6e7c9e207f592f6e30ef71793ffb3e",
	        "Created": "2023-08-11T23:42:35.958696069Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 132981,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-08-11T23:42:36.349359016Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:abe4482d178dd08cce0cdcb8e444349673c3edfa8e7d6462144a8d9173479eb6",
	        "ResolvConfPath": "/var/lib/docker/containers/7570f01b50c089bb59b90681fa5523c98b6e7c9e207f592f6e30ef71793ffb3e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7570f01b50c089bb59b90681fa5523c98b6e7c9e207f592f6e30ef71793ffb3e/hostname",
	        "HostsPath": "/var/lib/docker/containers/7570f01b50c089bb59b90681fa5523c98b6e7c9e207f592f6e30ef71793ffb3e/hosts",
	        "LogPath": "/var/lib/docker/containers/7570f01b50c089bb59b90681fa5523c98b6e7c9e207f592f6e30ef71793ffb3e/7570f01b50c089bb59b90681fa5523c98b6e7c9e207f592f6e30ef71793ffb3e-json.log",
	        "Name": "/pause-634825",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-634825:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-634825",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/48d96744669f0db6ef1ed51a3bff756ca2ab4c9761a407cc6e9a399f7922b313-init/diff:/var/lib/docker/overlay2/9f8bf17bd2eed1bf502486fc30f9be0589884e58aed50b5fbf77bc48ebc9a592/diff",
	                "MergedDir": "/var/lib/docker/overlay2/48d96744669f0db6ef1ed51a3bff756ca2ab4c9761a407cc6e9a399f7922b313/merged",
	                "UpperDir": "/var/lib/docker/overlay2/48d96744669f0db6ef1ed51a3bff756ca2ab4c9761a407cc6e9a399f7922b313/diff",
	                "WorkDir": "/var/lib/docker/overlay2/48d96744669f0db6ef1ed51a3bff756ca2ab4c9761a407cc6e9a399f7922b313/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-634825",
	                "Source": "/var/lib/docker/volumes/pause-634825/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-634825",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-634825",
	                "name.minikube.sigs.k8s.io": "pause-634825",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cd139406558147eeaa5a424b5314ec71fc805e07b0a9f9ebd7fa779c74b2b152",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32963"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32962"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32959"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32961"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32960"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/cd1394065581",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-634825": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "7570f01b50c0",
	                        "pause-634825"
	                    ],
	                    "NetworkID": "d336f126a2be1940786fbc43fe7ddf30c0d968fec797cc397b6c954713823928",
	                    "EndpointID": "c211db29f9840c3b753c3e7babab5a783d4d6285613b88b99c62c64e4e85054a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
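The inspect output shows container port 8443/tcp (the apiserver) published on 127.0.0.1:32960 for this run. A minimal sketch, outside the test suite, that pulls that binding straight out of the docker inspect JSON, using the container name captured above:

	// inspect_port.go: ad-hoc helper, not part of the test suite.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// Only the fields needed from the inspect JSON are modelled here.
	type container struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIP   string `json:"HostIp"`
				HostPort string `json:"HostPort"`
			} `json:"Ports"`
		} `json:"NetworkSettings"`
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "pause-634825").Output()
		if err != nil {
			log.Fatal(err)
		}
		var containers []container // docker inspect always returns a JSON array
		if err := json.Unmarshal(out, &containers); err != nil {
			log.Fatal(err)
		}
		if len(containers) == 0 {
			log.Fatal("no container in inspect output")
		}
		for _, b := range containers[0].NetworkSettings.Ports["8443/tcp"] {
			fmt.Printf("apiserver published on %s:%s\n", b.HostIP, b.HostPort) // 127.0.0.1:32960 above
		}
	}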
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-634825 -n pause-634825
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p pause-634825 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p pause-634825 logs -n 25: (2.166082283s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p NoKubernetes-886838            | NoKubernetes-886838       | jenkins | v1.31.1 | 11 Aug 23 23:36 UTC |                     |
	|         | --no-kubernetes                   |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20         |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-886838            | NoKubernetes-886838       | jenkins | v1.31.1 | 11 Aug 23 23:36 UTC | 11 Aug 23 23:37 UTC |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-886838            | NoKubernetes-886838       | jenkins | v1.31.1 | 11 Aug 23 23:37 UTC | 11 Aug 23 23:37 UTC |
	|         | --no-kubernetes                   |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-886838            | NoKubernetes-886838       | jenkins | v1.31.1 | 11 Aug 23 23:37 UTC | 11 Aug 23 23:37 UTC |
	| start   | -p NoKubernetes-886838            | NoKubernetes-886838       | jenkins | v1.31.1 | 11 Aug 23 23:37 UTC | 11 Aug 23 23:37 UTC |
	|         | --no-kubernetes                   |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-886838 sudo       | NoKubernetes-886838       | jenkins | v1.31.1 | 11 Aug 23 23:37 UTC |                     |
	|         | systemctl is-active --quiet       |                           |         |         |                     |                     |
	|         | service kubelet                   |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-886838            | NoKubernetes-886838       | jenkins | v1.31.1 | 11 Aug 23 23:37 UTC | 11 Aug 23 23:37 UTC |
	| start   | -p NoKubernetes-886838            | NoKubernetes-886838       | jenkins | v1.31.1 | 11 Aug 23 23:37 UTC | 11 Aug 23 23:37 UTC |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-886838 sudo       | NoKubernetes-886838       | jenkins | v1.31.1 | 11 Aug 23 23:37 UTC |                     |
	|         | systemctl is-active --quiet       |                           |         |         |                     |                     |
	|         | service kubelet                   |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-886838            | NoKubernetes-886838       | jenkins | v1.31.1 | 11 Aug 23 23:37 UTC | 11 Aug 23 23:38 UTC |
	| start   | -p kubernetes-upgrade-788862      | kubernetes-upgrade-788862 | jenkins | v1.31.1 | 11 Aug 23 23:38 UTC | 11 Aug 23 23:39 UTC |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p missing-upgrade-550468         | missing-upgrade-550468    | jenkins | v1.31.1 | 11 Aug 23 23:39 UTC |                     |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-788862      | kubernetes-upgrade-788862 | jenkins | v1.31.1 | 11 Aug 23 23:39 UTC | 11 Aug 23 23:39 UTC |
	| start   | -p kubernetes-upgrade-788862      | kubernetes-upgrade-788862 | jenkins | v1.31.1 | 11 Aug 23 23:39 UTC | 11 Aug 23 23:43 UTC |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.0-rc.0 |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| delete  | -p missing-upgrade-550468         | missing-upgrade-550468    | jenkins | v1.31.1 | 11 Aug 23 23:39 UTC | 11 Aug 23 23:39 UTC |
	| start   | -p stopped-upgrade-773979         | stopped-upgrade-773979    | jenkins | v1.31.1 | 11 Aug 23 23:41 UTC |                     |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-773979         | stopped-upgrade-773979    | jenkins | v1.31.1 | 11 Aug 23 23:41 UTC | 11 Aug 23 23:41 UTC |
	| start   | -p running-upgrade-341136         | running-upgrade-341136    | jenkins | v1.31.1 | 11 Aug 23 23:42 UTC |                     |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-341136         | running-upgrade-341136    | jenkins | v1.31.1 | 11 Aug 23 23:42 UTC | 11 Aug 23 23:42 UTC |
	| start   | -p pause-634825 --memory=2048     | pause-634825              | jenkins | v1.31.1 | 11 Aug 23 23:42 UTC | 11 Aug 23 23:43 UTC |
	|         | --install-addons=false            |                           |         |         |                     |                     |
	|         | --wait=all --driver=docker        |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p pause-634825                   | pause-634825              | jenkins | v1.31.1 | 11 Aug 23 23:43 UTC | 11 Aug 23 23:44 UTC |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-788862      | kubernetes-upgrade-788862 | jenkins | v1.31.1 | 11 Aug 23 23:43 UTC |                     |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                           |         |         |                     |                     |
	|         | --driver=docker                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-788862      | kubernetes-upgrade-788862 | jenkins | v1.31.1 | 11 Aug 23 23:43 UTC | 11 Aug 23 23:44 UTC |
	|         | --memory=2200                     |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.0-rc.0 |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-788862      | kubernetes-upgrade-788862 | jenkins | v1.31.1 | 11 Aug 23 23:44 UTC | 11 Aug 23 23:44 UTC |
	| start   | -p force-systemd-flag-847326      | force-systemd-flag-847326 | jenkins | v1.31.1 | 11 Aug 23 23:44 UTC |                     |
	|         | --memory=2048 --force-systemd     |                           |         |         |                     |                     |
	|         | --alsologtostderr                 |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker              |                           |         |         |                     |                     |
	|         | --container-runtime=crio          |                           |         |         |                     |                     |
	|---------|-----------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/11 23:44:25
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.20.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0811 23:44:25.957152  141718 out.go:296] Setting OutFile to fd 1 ...
	I0811 23:44:25.957317  141718 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:44:25.957325  141718 out.go:309] Setting ErrFile to fd 2...
	I0811 23:44:25.957331  141718 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:44:25.957601  141718 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17044-2333/.minikube/bin
	I0811 23:44:25.958009  141718 out.go:303] Setting JSON to false
	I0811 23:44:25.959045  141718 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5214,"bootTime":1691792252,"procs":279,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0811 23:44:25.959119  141718 start.go:138] virtualization:  
	I0811 23:44:25.962521  141718 out.go:177] * [force-systemd-flag-847326] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	I0811 23:44:25.964424  141718 out.go:177]   - MINIKUBE_LOCATION=17044
	I0811 23:44:25.964505  141718 notify.go:220] Checking for updates...
	I0811 23:44:25.969742  141718 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0811 23:44:25.971475  141718 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17044-2333/kubeconfig
	I0811 23:44:25.973132  141718 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17044-2333/.minikube
	I0811 23:44:25.974776  141718 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0811 23:44:25.976263  141718 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0811 23:44:25.982366  141718 config.go:182] Loaded profile config "pause-634825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0811 23:44:25.982557  141718 driver.go:373] Setting default libvirt URI to qemu:///system
	I0811 23:44:26.027228  141718 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0811 23:44:26.027324  141718 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0811 23:44:26.160610  141718 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:45 SystemTime:2023-08-11 23:44:26.150287761 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0811 23:44:26.160718  141718 docker.go:294] overlay module found
	I0811 23:44:26.163379  141718 out.go:177] * Using the docker driver based on user configuration
	I0811 23:44:26.165286  141718 start.go:298] selected driver: docker
	I0811 23:44:26.165303  141718 start.go:901] validating driver "docker" against <nil>
	I0811 23:44:26.165316  141718 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0811 23:44:26.165933  141718 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0811 23:44:26.280345  141718 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:45 SystemTime:2023-08-11 23:44:26.269460259 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0811 23:44:26.280500  141718 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0811 23:44:26.280711  141718 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0811 23:44:26.282639  141718 out.go:177] * Using Docker driver with root privileges
	I0811 23:44:26.284699  141718 cni.go:84] Creating CNI manager for ""
	I0811 23:44:26.284720  141718 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0811 23:44:26.284729  141718 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0811 23:44:26.284756  141718 start_flags.go:319] config:
	{Name:force-systemd-flag-847326 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:force-systemd-flag-847326 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0811 23:44:26.287147  141718 out.go:177] * Starting control plane node force-systemd-flag-847326 in cluster force-systemd-flag-847326
	I0811 23:44:26.289024  141718 cache.go:122] Beginning downloading kic base image for docker with crio
	I0811 23:44:26.290916  141718 out.go:177] * Pulling base image ...
	I0811 23:44:26.292628  141718 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0811 23:44:26.292681  141718 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17044-2333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-arm64.tar.lz4
	I0811 23:44:26.292692  141718 cache.go:57] Caching tarball of preloaded images
	I0811 23:44:26.292708  141718 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 in local docker daemon
	I0811 23:44:26.292784  141718 preload.go:174] Found /home/jenkins/minikube-integration/17044-2333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0811 23:44:26.292794  141718 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on crio
	I0811 23:44:26.292907  141718 profile.go:148] Saving config to /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/force-systemd-flag-847326/config.json ...
	I0811 23:44:26.292925  141718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/force-systemd-flag-847326/config.json: {Name:mk0d5fddc1d5c8d8c581afb5bc750de470eaa853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:44:26.315040  141718 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 in local docker daemon, skipping pull
	I0811 23:44:26.315061  141718 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 exists in daemon, skipping load
	I0811 23:44:26.315084  141718 cache.go:195] Successfully downloaded all kic artifacts
	I0811 23:44:26.315214  141718 start.go:365] acquiring machines lock for force-systemd-flag-847326: {Name:mka36b9178bdba7c081cc28eabba2f8f60b312c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0811 23:44:26.315411  141718 start.go:369] acquired machines lock for "force-systemd-flag-847326" in 174.056µs
	I0811 23:44:26.315476  141718 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-847326 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:force-systemd-flag-847326 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0811 23:44:26.315580  141718 start.go:125] createHost starting for "" (driver="docker")
	I0811 23:44:24.954158  137978 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0811 23:44:24.961993  137978 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.27.4/kubectl ...
	I0811 23:44:24.962017  137978 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0811 23:44:25.015407  137978 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0811 23:44:26.103752  137978 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.27.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.088308985s)
	I0811 23:44:26.103781  137978 system_pods.go:43] waiting for kube-system pods to appear ...
	I0811 23:44:26.128275  137978 system_pods.go:59] 7 kube-system pods found
	I0811 23:44:26.128342  137978 system_pods.go:61] "coredns-5d78c9869d-7zz5s" [e3005957-3506-4ea1-a12a-1961a28c67d4] Running
	I0811 23:44:26.128369  137978 system_pods.go:61] "etcd-pause-634825" [e24d09aa-5f75-4c4a-ab28-c44978cacf22] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0811 23:44:26.128392  137978 system_pods.go:61] "kindnet-q6qpq" [cf34f80e-7018-4003-b7c5-94c7c8ea41da] Running
	I0811 23:44:26.128436  137978 system_pods.go:61] "kube-apiserver-pause-634825" [14d6efed-a5f8-4440-b368-8b3ffef2412b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0811 23:44:26.128460  137978 system_pods.go:61] "kube-controller-manager-pause-634825" [977c2210-4efb-43e0-9a65-ee526f24ca57] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0811 23:44:26.128481  137978 system_pods.go:61] "kube-proxy-sptbv" [f9473cb2-7b87-40e8-891a-aa651f27406d] Running
	I0811 23:44:26.128539  137978 system_pods.go:61] "kube-scheduler-pause-634825" [6d805427-0c85-416f-93b7-a27be1ae2294] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0811 23:44:26.128656  137978 system_pods.go:74] duration metric: took 24.868164ms to wait for pod list to return data ...
	I0811 23:44:26.128686  137978 node_conditions.go:102] verifying NodePressure condition ...
	I0811 23:44:26.132617  137978 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0811 23:44:26.132645  137978 node_conditions.go:123] node cpu capacity is 2
	I0811 23:44:26.132657  137978 node_conditions.go:105] duration metric: took 3.955177ms to run NodePressure ...
	I0811 23:44:26.132674  137978 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0811 23:44:26.389037  137978 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0811 23:44:26.396842  137978 kubeadm.go:787] kubelet initialised
	I0811 23:44:26.396864  137978 kubeadm.go:788] duration metric: took 7.807298ms waiting for restarted kubelet to initialise ...
	I0811 23:44:26.396873  137978 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0811 23:44:26.403326  137978 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-7zz5s" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:26.413614  137978 pod_ready.go:92] pod "coredns-5d78c9869d-7zz5s" in "kube-system" namespace has status "Ready":"True"
	I0811 23:44:26.413642  137978 pod_ready.go:81] duration metric: took 10.286092ms waiting for pod "coredns-5d78c9869d-7zz5s" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:26.413656  137978 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-634825" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:28.435506  137978 pod_ready.go:102] pod "etcd-pause-634825" in "kube-system" namespace has status "Ready":"False"
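	(For reference, the pod_ready gate being polled above is roughly what the following kubectl invocation checks; a sketch only, using this profile's context name, not a command the harness itself runs:
	  kubectl --context pause-634825 -n kube-system wait \
	    --for=condition=Ready pod/etcd-pause-634825 --timeout=4m0s
	  # prints "pod/etcd-pause-634825 condition met" once "Ready" flips to "True")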
	I0811 23:44:26.318152  141718 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0811 23:44:26.318553  141718 start.go:159] libmachine.API.Create for "force-systemd-flag-847326" (driver="docker")
	I0811 23:44:26.318574  141718 client.go:168] LocalClient.Create starting
	I0811 23:44:26.318729  141718 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem
	I0811 23:44:26.318816  141718 main.go:141] libmachine: Decoding PEM data...
	I0811 23:44:26.318835  141718 main.go:141] libmachine: Parsing certificate...
	I0811 23:44:26.318993  141718 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17044-2333/.minikube/certs/cert.pem
	I0811 23:44:26.319029  141718 main.go:141] libmachine: Decoding PEM data...
	I0811 23:44:26.319041  141718 main.go:141] libmachine: Parsing certificate...
	I0811 23:44:26.319865  141718 cli_runner.go:164] Run: docker network inspect force-systemd-flag-847326 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0811 23:44:26.343555  141718 cli_runner.go:211] docker network inspect force-systemd-flag-847326 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0811 23:44:26.343641  141718 network_create.go:281] running [docker network inspect force-systemd-flag-847326] to gather additional debugging logs...
	I0811 23:44:26.343658  141718 cli_runner.go:164] Run: docker network inspect force-systemd-flag-847326
	W0811 23:44:26.368853  141718 cli_runner.go:211] docker network inspect force-systemd-flag-847326 returned with exit code 1
	I0811 23:44:26.368886  141718 network_create.go:284] error running [docker network inspect force-systemd-flag-847326]: docker network inspect force-systemd-flag-847326: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-847326 not found
	I0811 23:44:26.368899  141718 network_create.go:286] output of [docker network inspect force-systemd-flag-847326]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-847326 not found
	
	** /stderr **
	I0811 23:44:26.368961  141718 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0811 23:44:26.393814  141718 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-cb015cdafab9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:3c:25:af:38} reservation:<nil>}
	I0811 23:44:26.394205  141718 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-c2f4372f433a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:1d:72:42:dd} reservation:<nil>}
	I0811 23:44:26.394708  141718 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000b7f7f0}
	I0811 23:44:26.394726  141718 network_create.go:123] attempt to create docker network force-systemd-flag-847326 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0811 23:44:26.394780  141718 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-847326 force-systemd-flag-847326
	I0811 23:44:26.497522  141718 network_create.go:107] docker network force-systemd-flag-847326 192.168.67.0/24 created
	I0811 23:44:26.497556  141718 kic.go:117] calculated static IP "192.168.67.2" for the "force-systemd-flag-847326" container
	I0811 23:44:26.497631  141718 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0811 23:44:26.515311  141718 cli_runner.go:164] Run: docker volume create force-systemd-flag-847326 --label name.minikube.sigs.k8s.io=force-systemd-flag-847326 --label created_by.minikube.sigs.k8s.io=true
	I0811 23:44:26.534514  141718 oci.go:103] Successfully created a docker volume force-systemd-flag-847326
	I0811 23:44:26.534601  141718 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-847326-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-847326 --entrypoint /usr/bin/test -v force-systemd-flag-847326:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 -d /var/lib
	I0811 23:44:27.145393  141718 oci.go:107] Successfully prepared a docker volume force-systemd-flag-847326
	I0811 23:44:27.145440  141718 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0811 23:44:27.145461  141718 kic.go:190] Starting extracting preloaded images to volume ...
	I0811 23:44:27.145550  141718 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17044-2333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-847326:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 -I lz4 -xf /preloaded.tar -C /extractDir
	I0811 23:44:30.965622  137978 pod_ready.go:102] pod "etcd-pause-634825" in "kube-system" namespace has status "Ready":"False"
	I0811 23:44:31.434478  137978 pod_ready.go:92] pod "etcd-pause-634825" in "kube-system" namespace has status "Ready":"True"
	I0811 23:44:31.434505  137978 pod_ready.go:81] duration metric: took 5.020841635s waiting for pod "etcd-pause-634825" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:31.434520  137978 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-634825" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:31.440825  137978 pod_ready.go:92] pod "kube-apiserver-pause-634825" in "kube-system" namespace has status "Ready":"True"
	I0811 23:44:31.440851  137978 pod_ready.go:81] duration metric: took 6.323103ms waiting for pod "kube-apiserver-pause-634825" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:31.440863  137978 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-634825" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:31.447339  137978 pod_ready.go:92] pod "kube-controller-manager-pause-634825" in "kube-system" namespace has status "Ready":"True"
	I0811 23:44:31.447372  137978 pod_ready.go:81] duration metric: took 6.494469ms waiting for pod "kube-controller-manager-pause-634825" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:31.447384  137978 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-sptbv" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:31.455388  137978 pod_ready.go:92] pod "kube-proxy-sptbv" in "kube-system" namespace has status "Ready":"True"
	I0811 23:44:31.455407  137978 pod_ready.go:81] duration metric: took 8.015579ms waiting for pod "kube-proxy-sptbv" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:31.455417  137978 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-634825" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:31.707788  137978 pod_ready.go:92] pod "kube-scheduler-pause-634825" in "kube-system" namespace has status "Ready":"True"
	I0811 23:44:31.707813  137978 pod_ready.go:81] duration metric: took 252.388977ms waiting for pod "kube-scheduler-pause-634825" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:31.707823  137978 pod_ready.go:38] duration metric: took 5.310939544s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0811 23:44:31.707843  137978 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0811 23:44:31.718825  137978 ops.go:34] apiserver oom_adj: -16
	I0811 23:44:31.718842  137978 kubeadm.go:640] restartCluster took 30.943856137s
	I0811 23:44:31.718851  137978 kubeadm.go:406] StartCluster complete in 31.02393291s
	I0811 23:44:31.718865  137978 settings.go:142] acquiring lock: {Name:mkcdb2c6d2ae1cdcfca5cf5a992c9589250c7de5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:44:31.718922  137978 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17044-2333/kubeconfig
	I0811 23:44:31.719576  137978 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17044-2333/kubeconfig: {Name:mk6629381ac7815dbe689239b7a7612d237ee7bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:44:31.720231  137978 kapi.go:59] client config for pause-634825: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17044-2333/.minikube/profiles/pause-634825/client.crt", KeyFile:"/home/jenkins/minikube-integration/17044-2333/.minikube/profiles/pause-634825/client.key", CAFile:"/home/jenkins/minikube-integration/17044-2333/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16eb290), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0811 23:44:31.720696  137978 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0811 23:44:31.720950  137978 config.go:182] Loaded profile config "pause-634825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0811 23:44:31.720979  137978 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0811 23:44:31.725590  137978 out.go:177] * Enabled addons: 
	I0811 23:44:31.723434  137978 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-634825" context rescaled to 1 replicas
	I0811 23:44:31.725697  137978 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0811 23:44:31.727670  137978 out.go:177] * Verifying Kubernetes components...
	I0811 23:44:31.729508  137978 addons.go:502] enable addons completed in 8.522741ms: enabled=[]
	I0811 23:44:31.731692  137978 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0811 23:44:31.858559  137978 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0811 23:44:31.858611  137978 node_ready.go:35] waiting up to 6m0s for node "pause-634825" to be "Ready" ...
	I0811 23:44:31.909894  137978 node_ready.go:49] node "pause-634825" has status "Ready":"True"
	I0811 23:44:31.909921  137978 node_ready.go:38] duration metric: took 51.295166ms waiting for node "pause-634825" to be "Ready" ...
	I0811 23:44:31.909931  137978 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0811 23:44:32.111204  137978 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-7zz5s" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:32.511502  137978 pod_ready.go:92] pod "coredns-5d78c9869d-7zz5s" in "kube-system" namespace has status "Ready":"True"
	I0811 23:44:32.511527  137978 pod_ready.go:81] duration metric: took 400.293104ms waiting for pod "coredns-5d78c9869d-7zz5s" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:32.511539  137978 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-634825" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:32.907947  137978 pod_ready.go:92] pod "etcd-pause-634825" in "kube-system" namespace has status "Ready":"True"
	I0811 23:44:32.907973  137978 pod_ready.go:81] duration metric: took 396.426665ms waiting for pod "etcd-pause-634825" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:32.907988  137978 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-634825" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:33.307880  137978 pod_ready.go:92] pod "kube-apiserver-pause-634825" in "kube-system" namespace has status "Ready":"True"
	I0811 23:44:33.307910  137978 pod_ready.go:81] duration metric: took 399.913525ms waiting for pod "kube-apiserver-pause-634825" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:33.307934  137978 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-634825" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:33.708207  137978 pod_ready.go:92] pod "kube-controller-manager-pause-634825" in "kube-system" namespace has status "Ready":"True"
	I0811 23:44:33.708231  137978 pod_ready.go:81] duration metric: took 400.282578ms waiting for pod "kube-controller-manager-pause-634825" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:33.708244  137978 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sptbv" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:34.107687  137978 pod_ready.go:92] pod "kube-proxy-sptbv" in "kube-system" namespace has status "Ready":"True"
	I0811 23:44:34.107707  137978 pod_ready.go:81] duration metric: took 399.456201ms waiting for pod "kube-proxy-sptbv" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:34.107718  137978 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-634825" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:34.508256  137978 pod_ready.go:92] pod "kube-scheduler-pause-634825" in "kube-system" namespace has status "Ready":"True"
	I0811 23:44:34.508282  137978 pod_ready.go:81] duration metric: took 400.556034ms waiting for pod "kube-scheduler-pause-634825" in "kube-system" namespace to be "Ready" ...
	I0811 23:44:34.508292  137978 pod_ready.go:38] duration metric: took 2.59834828s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0811 23:44:34.508306  137978 api_server.go:52] waiting for apiserver process to appear ...
	I0811 23:44:34.508358  137978 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0811 23:44:34.524570  137978 api_server.go:72] duration metric: took 2.798782978s to wait for apiserver process to appear ...
	I0811 23:44:34.524597  137978 api_server.go:88] waiting for apiserver healthz status ...
	I0811 23:44:34.524613  137978 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0811 23:44:34.536485  137978 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0811 23:44:34.537800  137978 api_server.go:141] control plane version: v1.27.4
	I0811 23:44:34.537824  137978 api_server.go:131] duration metric: took 13.22057ms to wait for apiserver health ...
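	(The healthz probe above can be reproduced from a shell against the same endpoint; a sketch assuming the client certificate paths logged for this profile:
	  curl --cacert /home/jenkins/minikube-integration/17044-2333/.minikube/ca.crt \
	    --cert /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/pause-634825/client.crt \
	    --key /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/pause-634825/client.key \
	    https://192.168.76.2:8443/healthz
	  # expected body: ok, matching the 200 response above)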
	I0811 23:44:34.537833  137978 system_pods.go:43] waiting for kube-system pods to appear ...
	I0811 23:44:34.712104  137978 system_pods.go:59] 7 kube-system pods found
	I0811 23:44:34.712134  137978 system_pods.go:61] "coredns-5d78c9869d-7zz5s" [e3005957-3506-4ea1-a12a-1961a28c67d4] Running
	I0811 23:44:34.712141  137978 system_pods.go:61] "etcd-pause-634825" [e24d09aa-5f75-4c4a-ab28-c44978cacf22] Running
	I0811 23:44:34.712146  137978 system_pods.go:61] "kindnet-q6qpq" [cf34f80e-7018-4003-b7c5-94c7c8ea41da] Running
	I0811 23:44:34.712160  137978 system_pods.go:61] "kube-apiserver-pause-634825" [14d6efed-a5f8-4440-b368-8b3ffef2412b] Running
	I0811 23:44:34.712167  137978 system_pods.go:61] "kube-controller-manager-pause-634825" [977c2210-4efb-43e0-9a65-ee526f24ca57] Running
	I0811 23:44:34.712173  137978 system_pods.go:61] "kube-proxy-sptbv" [f9473cb2-7b87-40e8-891a-aa651f27406d] Running
	I0811 23:44:34.712178  137978 system_pods.go:61] "kube-scheduler-pause-634825" [6d805427-0c85-416f-93b7-a27be1ae2294] Running
	I0811 23:44:34.712187  137978 system_pods.go:74] duration metric: took 174.348754ms to wait for pod list to return data ...
	I0811 23:44:34.712196  137978 default_sa.go:34] waiting for default service account to be created ...
	I0811 23:44:34.907438  137978 default_sa.go:45] found service account: "default"
	I0811 23:44:34.907463  137978 default_sa.go:55] duration metric: took 195.261651ms for default service account to be created ...
	I0811 23:44:34.907474  137978 system_pods.go:116] waiting for k8s-apps to be running ...
	I0811 23:44:35.114088  137978 system_pods.go:86] 7 kube-system pods found
	I0811 23:44:35.114179  137978 system_pods.go:89] "coredns-5d78c9869d-7zz5s" [e3005957-3506-4ea1-a12a-1961a28c67d4] Running
	I0811 23:44:35.114201  137978 system_pods.go:89] "etcd-pause-634825" [e24d09aa-5f75-4c4a-ab28-c44978cacf22] Running
	I0811 23:44:35.114245  137978 system_pods.go:89] "kindnet-q6qpq" [cf34f80e-7018-4003-b7c5-94c7c8ea41da] Running
	I0811 23:44:35.114270  137978 system_pods.go:89] "kube-apiserver-pause-634825" [14d6efed-a5f8-4440-b368-8b3ffef2412b] Running
	I0811 23:44:35.114291  137978 system_pods.go:89] "kube-controller-manager-pause-634825" [977c2210-4efb-43e0-9a65-ee526f24ca57] Running
	I0811 23:44:35.114328  137978 system_pods.go:89] "kube-proxy-sptbv" [f9473cb2-7b87-40e8-891a-aa651f27406d] Running
	I0811 23:44:35.114357  137978 system_pods.go:89] "kube-scheduler-pause-634825" [6d805427-0c85-416f-93b7-a27be1ae2294] Running
	I0811 23:44:35.114381  137978 system_pods.go:126] duration metric: took 206.901804ms to wait for k8s-apps to be running ...
	I0811 23:44:35.114468  137978 system_svc.go:44] waiting for kubelet service to be running ....
	I0811 23:44:35.114564  137978 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0811 23:44:35.131486  137978 system_svc.go:56] duration metric: took 17.008961ms WaitForService to wait for kubelet.
	I0811 23:44:35.131510  137978 kubeadm.go:581] duration metric: took 3.405728386s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0811 23:44:35.131531  137978 node_conditions.go:102] verifying NodePressure condition ...
	I0811 23:44:35.310376  137978 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0811 23:44:35.310412  137978 node_conditions.go:123] node cpu capacity is 2
	I0811 23:44:35.310423  137978 node_conditions.go:105] duration metric: took 178.887099ms to run NodePressure ...
	I0811 23:44:35.310434  137978 start.go:228] waiting for startup goroutines ...
	I0811 23:44:35.310449  137978 start.go:233] waiting for cluster config update ...
	I0811 23:44:35.310461  137978 start.go:242] writing updated cluster config ...
	I0811 23:44:35.310854  137978 ssh_runner.go:195] Run: rm -f paused
	I0811 23:44:35.393749  137978 start.go:599] kubectl: 1.27.4, cluster: 1.27.4 (minor skew: 0)
	I0811 23:44:35.396926  137978 out.go:177] * Done! kubectl is now configured to use "pause-634825" cluster and "default" namespace by default
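	(With the kubeconfig updated as above, the cluster can be inspected directly; a sketch assuming the context name minikube just configured:
	  kubectl --context pause-634825 get nodes
	  # should list pause-634825 as Ready, consistent with the node_ready check above)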
	I0811 23:44:31.355632  141718 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17044-2333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-847326:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 -I lz4 -xf /preloaded.tar -C /extractDir: (4.210044795s)
	I0811 23:44:31.355665  141718 kic.go:199] duration metric: took 4.210202 seconds to extract preloaded images to volume
	W0811 23:44:31.355806  141718 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0811 23:44:31.355922  141718 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0811 23:44:31.461054  141718 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-847326 --name force-systemd-flag-847326 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-847326 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-847326 --network force-systemd-flag-847326 --ip 192.168.67.2 --volume force-systemd-flag-847326:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37
	I0811 23:44:31.879797  141718 cli_runner.go:164] Run: docker container inspect force-systemd-flag-847326 --format={{.State.Running}}
	I0811 23:44:31.908554  141718 cli_runner.go:164] Run: docker container inspect force-systemd-flag-847326 --format={{.State.Status}}
	I0811 23:44:31.944500  141718 cli_runner.go:164] Run: docker exec force-systemd-flag-847326 stat /var/lib/dpkg/alternatives/iptables
	I0811 23:44:32.020865  141718 oci.go:144] the created container "force-systemd-flag-847326" has a running status.
	I0811 23:44:32.020898  141718 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17044-2333/.minikube/machines/force-systemd-flag-847326/id_rsa...
	I0811 23:44:32.764996  141718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/machines/force-systemd-flag-847326/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0811 23:44:32.765044  141718 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17044-2333/.minikube/machines/force-systemd-flag-847326/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0811 23:44:32.792160  141718 cli_runner.go:164] Run: docker container inspect force-systemd-flag-847326 --format={{.State.Status}}
	I0811 23:44:32.818709  141718 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0811 23:44:32.818730  141718 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-847326 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0811 23:44:32.929929  141718 cli_runner.go:164] Run: docker container inspect force-systemd-flag-847326 --format={{.State.Status}}
	I0811 23:44:32.955560  141718 machine.go:88] provisioning docker machine ...
	I0811 23:44:32.955587  141718 ubuntu.go:169] provisioning hostname "force-systemd-flag-847326"
	I0811 23:44:32.955651  141718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-847326
	I0811 23:44:32.988179  141718 main.go:141] libmachine: Using SSH client type: native
	I0811 23:44:32.988651  141718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f940] 0x3a22d0 <nil>  [] 0s} 127.0.0.1 32968 <nil> <nil>}
	I0811 23:44:32.988671  141718 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-847326 && echo "force-systemd-flag-847326" | sudo tee /etc/hostname
	I0811 23:44:33.172085  141718 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-847326
	
	I0811 23:44:33.172173  141718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-847326
	I0811 23:44:33.192156  141718 main.go:141] libmachine: Using SSH client type: native
	I0811 23:44:33.192583  141718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f940] 0x3a22d0 <nil>  [] 0s} 127.0.0.1 32968 <nil> <nil>}
	I0811 23:44:33.192603  141718 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-847326' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-847326/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-847326' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0811 23:44:33.350821  141718 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0811 23:44:33.350843  141718 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17044-2333/.minikube CaCertPath:/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17044-2333/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17044-2333/.minikube}
	I0811 23:44:33.350863  141718 ubuntu.go:177] setting up certificates
	I0811 23:44:33.350871  141718 provision.go:83] configureAuth start
	I0811 23:44:33.350939  141718 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-847326
	I0811 23:44:33.371037  141718 provision.go:138] copyHostCerts
	I0811 23:44:33.371076  141718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17044-2333/.minikube/cert.pem
	I0811 23:44:33.371108  141718 exec_runner.go:144] found /home/jenkins/minikube-integration/17044-2333/.minikube/cert.pem, removing ...
	I0811 23:44:33.371115  141718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17044-2333/.minikube/cert.pem
	I0811 23:44:33.371190  141718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17044-2333/.minikube/cert.pem (1123 bytes)
	I0811 23:44:33.371267  141718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17044-2333/.minikube/key.pem
	I0811 23:44:33.371285  141718 exec_runner.go:144] found /home/jenkins/minikube-integration/17044-2333/.minikube/key.pem, removing ...
	I0811 23:44:33.371289  141718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17044-2333/.minikube/key.pem
	I0811 23:44:33.371314  141718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17044-2333/.minikube/key.pem (1675 bytes)
	I0811 23:44:33.371353  141718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17044-2333/.minikube/ca.pem
	I0811 23:44:33.371367  141718 exec_runner.go:144] found /home/jenkins/minikube-integration/17044-2333/.minikube/ca.pem, removing ...
	I0811 23:44:33.371371  141718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17044-2333/.minikube/ca.pem
	I0811 23:44:33.371394  141718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17044-2333/.minikube/ca.pem (1082 bytes)
	I0811 23:44:33.371441  141718 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17044-2333/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-847326 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube force-systemd-flag-847326]
	I0811 23:44:34.101560  141718 provision.go:172] copyRemoteCerts
	I0811 23:44:34.101630  141718 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0811 23:44:34.101670  141718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-847326
	I0811 23:44:34.122067  141718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/force-systemd-flag-847326/id_rsa Username:docker}
	I0811 23:44:34.231914  141718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0811 23:44:34.231986  141718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0811 23:44:34.262109  141718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0811 23:44:34.262215  141718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0811 23:44:34.292090  141718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0811 23:44:34.292154  141718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0811 23:44:34.322747  141718 provision.go:86] duration metric: configureAuth took 971.859893ms
	I0811 23:44:34.322773  141718 ubuntu.go:193] setting minikube options for container-runtime
	I0811 23:44:34.323007  141718 config.go:182] Loaded profile config "force-systemd-flag-847326": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0811 23:44:34.323127  141718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-847326
	I0811 23:44:34.343035  141718 main.go:141] libmachine: Using SSH client type: native
	I0811 23:44:34.343474  141718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f940] 0x3a22d0 <nil>  [] 0s} 127.0.0.1 32968 <nil> <nil>}
	I0811 23:44:34.343499  141718 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0811 23:44:34.625493  141718 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0811 23:44:34.625514  141718 machine.go:91] provisioned docker machine in 1.669936921s
	I0811 23:44:34.625524  141718 client.go:171] LocalClient.Create took 8.306945348s
	I0811 23:44:34.625536  141718 start.go:167] duration metric: libmachine.API.Create for "force-systemd-flag-847326" took 8.306985274s
	I0811 23:44:34.625544  141718 start.go:300] post-start starting for "force-systemd-flag-847326" (driver="docker")
	I0811 23:44:34.625553  141718 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0811 23:44:34.625637  141718 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0811 23:44:34.625684  141718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-847326
	I0811 23:44:34.651528  141718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/force-systemd-flag-847326/id_rsa Username:docker}
	I0811 23:44:34.758051  141718 ssh_runner.go:195] Run: cat /etc/os-release
	I0811 23:44:34.763171  141718 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0811 23:44:34.763204  141718 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0811 23:44:34.763215  141718 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0811 23:44:34.763222  141718 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0811 23:44:34.763234  141718 filesync.go:126] Scanning /home/jenkins/minikube-integration/17044-2333/.minikube/addons for local assets ...
	I0811 23:44:34.763294  141718 filesync.go:126] Scanning /home/jenkins/minikube-integration/17044-2333/.minikube/files for local assets ...
	I0811 23:44:34.763389  141718 filesync.go:149] local asset: /home/jenkins/minikube-integration/17044-2333/.minikube/files/etc/ssl/certs/76342.pem -> 76342.pem in /etc/ssl/certs
	I0811 23:44:34.763397  141718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17044-2333/.minikube/files/etc/ssl/certs/76342.pem -> /etc/ssl/certs/76342.pem
	I0811 23:44:34.763496  141718 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0811 23:44:34.774726  141718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17044-2333/.minikube/files/etc/ssl/certs/76342.pem --> /etc/ssl/certs/76342.pem (1708 bytes)
	I0811 23:44:34.806021  141718 start.go:303] post-start completed in 180.463454ms
	I0811 23:44:34.806394  141718 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-847326
	I0811 23:44:34.828916  141718 profile.go:148] Saving config to /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/force-systemd-flag-847326/config.json ...
	I0811 23:44:34.829270  141718 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0811 23:44:34.829322  141718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-847326
	I0811 23:44:34.846501  141718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/force-systemd-flag-847326/id_rsa Username:docker}
	I0811 23:44:34.948000  141718 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0811 23:44:34.953892  141718 start.go:128] duration metric: createHost completed in 8.638296538s
	I0811 23:44:34.953915  141718 start.go:83] releasing machines lock for "force-systemd-flag-847326", held for 8.638490484s
	I0811 23:44:34.953984  141718 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-847326
	I0811 23:44:34.974966  141718 ssh_runner.go:195] Run: cat /version.json
	I0811 23:44:34.975029  141718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-847326
	I0811 23:44:34.975277  141718 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0811 23:44:34.975329  141718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-847326
	I0811 23:44:35.003638  141718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/force-systemd-flag-847326/id_rsa Username:docker}
	I0811 23:44:35.015433  141718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/force-systemd-flag-847326/id_rsa Username:docker}
	I0811 23:44:35.106424  141718 ssh_runner.go:195] Run: systemctl --version
	I0811 23:44:35.255782  141718 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0811 23:44:35.414078  141718 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0811 23:44:35.433313  141718 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0811 23:44:35.510700  141718 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0811 23:44:35.510807  141718 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0811 23:44:35.611290  141718 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0811 23:44:35.611314  141718 start.go:466] detecting cgroup driver to use...
	I0811 23:44:35.611327  141718 start.go:470] using "systemd" cgroup driver as enforced via flags
	I0811 23:44:35.611380  141718 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0811 23:44:35.655732  141718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0811 23:44:35.697501  141718 docker.go:196] disabling cri-docker service (if available) ...
	I0811 23:44:35.697569  141718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0811 23:44:35.724784  141718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0811 23:44:35.761653  141718 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0811 23:44:35.941053  141718 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0811 23:44:36.117419  141718 docker.go:212] disabling docker service ...
	I0811 23:44:36.117493  141718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0811 23:44:36.170887  141718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0811 23:44:36.193901  141718 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0811 23:44:36.336071  141718 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0811 23:44:36.474607  141718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0811 23:44:36.490538  141718 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0811 23:44:36.514081  141718 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0811 23:44:36.514146  141718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0811 23:44:36.527569  141718 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0811 23:44:36.527635  141718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0811 23:44:36.545940  141718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0811 23:44:36.560317  141718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
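	(Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with settings like the following; reconstructed from the sed expressions, not captured from the node:
	  pause_image = "registry.k8s.io/pause:3.9"
	  cgroup_manager = "systemd"
	  conmon_cgroup = "pod")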
	I0811 23:44:36.573582  141718 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0811 23:44:36.587813  141718 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0811 23:44:36.598638  141718 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0811 23:44:36.611152  141718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0811 23:44:36.732677  141718 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0811 23:44:36.885763  141718 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0811 23:44:36.885833  141718 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0811 23:44:36.893188  141718 start.go:534] Will wait 60s for crictl version
	I0811 23:44:36.893250  141718 ssh_runner.go:195] Run: which crictl
	I0811 23:44:36.898681  141718 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0811 23:44:36.956417  141718 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0811 23:44:36.956503  141718 ssh_runner.go:195] Run: crio --version
	I0811 23:44:37.019716  141718 ssh_runner.go:195] Run: crio --version
	I0811 23:44:37.084094  141718 out.go:177] * Preparing Kubernetes v1.27.4 on CRI-O 1.24.6 ...
	
	* 
	* ==> CRI-O <==
	* Aug 11 23:44:23 pause-634825 crio[2601]: time="2023-08-11 23:44:23.920556101Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/cc9c3f65a74c58cd85cfbc049cc879f0e291da2794b867949238551ff73ac6a0/merged/etc/group: no such file or directory"
	Aug 11 23:44:24 pause-634825 crio[2601]: time="2023-08-11 23:44:24.040507146Z" level=info msg="Created container b33696f9f00d4a0f1ca440313049b11491bcca9e341bbc3065dbd76850ec5732: kube-system/coredns-5d78c9869d-7zz5s/coredns" id=88201f77-0fdc-4085-867b-d0e7039028c0 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 11 23:44:24 pause-634825 crio[2601]: time="2023-08-11 23:44:24.041248205Z" level=info msg="Starting container: b33696f9f00d4a0f1ca440313049b11491bcca9e341bbc3065dbd76850ec5732" id=2b3e26c4-96b4-4807-9110-c9e138db5ee2 name=/runtime.v1.RuntimeService/StartContainer
	Aug 11 23:44:24 pause-634825 crio[2601]: time="2023-08-11 23:44:24.091959923Z" level=info msg="Created container fd096ea5b2598f6d5669d906e60351567b0f7e31d776bb25d0d915ee7d8ff33b: kube-system/kindnet-q6qpq/kindnet-cni" id=30574617-4ab4-46cb-8ef3-1ddd8dc48bd2 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 11 23:44:24 pause-634825 crio[2601]: time="2023-08-11 23:44:24.093664518Z" level=info msg="Starting container: fd096ea5b2598f6d5669d906e60351567b0f7e31d776bb25d0d915ee7d8ff33b" id=b42d14a3-c8c9-4555-9973-a2a55dec4600 name=/runtime.v1.RuntimeService/StartContainer
	Aug 11 23:44:24 pause-634825 crio[2601]: time="2023-08-11 23:44:24.104806566Z" level=info msg="Started container" PID=3477 containerID=b33696f9f00d4a0f1ca440313049b11491bcca9e341bbc3065dbd76850ec5732 description=kube-system/coredns-5d78c9869d-7zz5s/coredns id=2b3e26c4-96b4-4807-9110-c9e138db5ee2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a48e172e7e9f0377f2066cba42f691ab5a47b2a04a9c0ae2fab01ccd31ddbf5b
	Aug 11 23:44:24 pause-634825 crio[2601]: time="2023-08-11 23:44:24.122193489Z" level=info msg="Started container" PID=3471 containerID=fd096ea5b2598f6d5669d906e60351567b0f7e31d776bb25d0d915ee7d8ff33b description=kube-system/kindnet-q6qpq/kindnet-cni id=b42d14a3-c8c9-4555-9973-a2a55dec4600 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7e7075607dcbe29b71e83494f2d6e7bf7aeb9ae7400c5d010aa551dac55bca68
	Aug 11 23:44:24 pause-634825 crio[2601]: time="2023-08-11 23:44:24.276815661Z" level=info msg="Created container c7c4e7c90ec34f302c92216e3692ff9844e7c07ed749ac9c6f7b8b52e12e1284: kube-system/kube-proxy-sptbv/kube-proxy" id=9324ac04-7570-45b0-983d-da200f0dfb03 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 11 23:44:24 pause-634825 crio[2601]: time="2023-08-11 23:44:24.277580909Z" level=info msg="Starting container: c7c4e7c90ec34f302c92216e3692ff9844e7c07ed749ac9c6f7b8b52e12e1284" id=f9d8c716-53b7-465e-99bd-f0ce55f2bd47 name=/runtime.v1.RuntimeService/StartContainer
	Aug 11 23:44:24 pause-634825 crio[2601]: time="2023-08-11 23:44:24.297686554Z" level=info msg="Started container" PID=3462 containerID=c7c4e7c90ec34f302c92216e3692ff9844e7c07ed749ac9c6f7b8b52e12e1284 description=kube-system/kube-proxy-sptbv/kube-proxy id=f9d8c716-53b7-465e-99bd-f0ce55f2bd47 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ef63d5ca5c5deeff0492472fbca6f69ccde92debf681996838d5fda80989c9d7
	Aug 11 23:44:24 pause-634825 crio[2601]: time="2023-08-11 23:44:24.553592117Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Aug 11 23:44:24 pause-634825 crio[2601]: time="2023-08-11 23:44:24.585432127Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 11 23:44:24 pause-634825 crio[2601]: time="2023-08-11 23:44:24.585469542Z" level=info msg="Updated default CNI network name to kindnet"
	Aug 11 23:44:24 pause-634825 crio[2601]: time="2023-08-11 23:44:24.633420200Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Aug 11 23:44:24 pause-634825 crio[2601]: time="2023-08-11 23:44:24.659088541Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 11 23:44:24 pause-634825 crio[2601]: time="2023-08-11 23:44:24.659123438Z" level=info msg="Updated default CNI network name to kindnet"
	Aug 11 23:44:24 pause-634825 crio[2601]: time="2023-08-11 23:44:24.659142179Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Aug 11 23:44:24 pause-634825 crio[2601]: time="2023-08-11 23:44:24.677721537Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 11 23:44:24 pause-634825 crio[2601]: time="2023-08-11 23:44:24.677753176Z" level=info msg="Updated default CNI network name to kindnet"
	Aug 11 23:44:24 pause-634825 crio[2601]: time="2023-08-11 23:44:24.677769242Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Aug 11 23:44:24 pause-634825 crio[2601]: time="2023-08-11 23:44:24.698199246Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 11 23:44:24 pause-634825 crio[2601]: time="2023-08-11 23:44:24.698240797Z" level=info msg="Updated default CNI network name to kindnet"
	Aug 11 23:44:24 pause-634825 crio[2601]: time="2023-08-11 23:44:24.698260473Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Aug 11 23:44:24 pause-634825 crio[2601]: time="2023-08-11 23:44:24.710444072Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 11 23:44:24 pause-634825 crio[2601]: time="2023-08-11 23:44:24.710500515Z" level=info msg="Updated default CNI network name to kindnet"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b33696f9f00d4       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   16 seconds ago      Running             coredns                   2                   a48e172e7e9f0       coredns-5d78c9869d-7zz5s
	fd096ea5b2598       b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79   16 seconds ago      Running             kindnet-cni               3                   7e7075607dcbe       kindnet-q6qpq
	c7c4e7c90ec34       532e5a30e948f1c084333316b13e68fbeff8df667f3830b082005127a6d86317   16 seconds ago      Running             kube-proxy                3                   ef63d5ca5c5de       kube-proxy-sptbv
	a8a7f2ce1a045       64aece92d6bde5b472d8185fcd2d5ab1add8814923a26561821f7cab5e819388   25 seconds ago      Running             kube-apiserver            2                   1939f6bc8aca7       kube-apiserver-pause-634825
	97fa57d0ed11c       24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737   25 seconds ago      Running             etcd                      2                   67766735af340       etcd-pause-634825
	bcc7ddc18d12e       389f6f052cf83156f82a2bbbf6ea2c24292d246b58900d91f6a1707eacf510b2   25 seconds ago      Running             kube-controller-manager   2                   2027f21f45cb4       kube-controller-manager-pause-634825
	7447e17b7ff99       6eb63895cb67fce76da3ed6eaaa865ff55e7c761c9e6a691a83855ff0987a085   25 seconds ago      Running             kube-scheduler            2                   b8ff3a6415a63       kube-scheduler-pause-634825
	4425001a8d2b4       532e5a30e948f1c084333316b13e68fbeff8df667f3830b082005127a6d86317   30 seconds ago      Exited              kube-proxy                2                   ef63d5ca5c5de       kube-proxy-sptbv
	792bd634ed394       b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79   30 seconds ago      Exited              kindnet-cni               2                   7e7075607dcbe       kindnet-q6qpq
	d01488efe1422       389f6f052cf83156f82a2bbbf6ea2c24292d246b58900d91f6a1707eacf510b2   41 seconds ago      Exited              kube-controller-manager   1                   2027f21f45cb4       kube-controller-manager-pause-634825
	d529e1a8dc40a       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   42 seconds ago      Exited              coredns                   1                   a48e172e7e9f0       coredns-5d78c9869d-7zz5s
	24d42e36b289e       64aece92d6bde5b472d8185fcd2d5ab1add8814923a26561821f7cab5e819388   42 seconds ago      Exited              kube-apiserver            1                   1939f6bc8aca7       kube-apiserver-pause-634825
	6024f9aef0a28       24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737   42 seconds ago      Exited              etcd                      1                   67766735af340       etcd-pause-634825
	9ecaf1f2abdda       6eb63895cb67fce76da3ed6eaaa865ff55e7c761c9e6a691a83855ff0987a085   42 seconds ago      Exited              kube-scheduler            1                   b8ff3a6415a63       kube-scheduler-pause-634825
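	(A listing like the one above can be reproduced on the node itself; a sketch using the minikube binary and profile name from these logs:
	  out/minikube-linux-arm64 -p pause-634825 ssh -- sudo crictl ps -a)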
	
	* 
	* ==> coredns [b33696f9f00d4a0f1ca440313049b11491bcca9e341bbc3065dbd76850ec5732] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:43932 - 41693 "HINFO IN 445594622542569235.3547650646003049932. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.015790826s
	
	* 
	* ==> coredns [d529e1a8dc40a6ab1f2b51e1d86c5819a7cae02e60dcc86a018f72ddb94523f1] <==
	* 
	* 
	* ==> describe nodes <==
	* Name:               pause-634825
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-634825
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0bff008270ec17d4e0c2c90a14e18ac31a0e01f5
	                    minikube.k8s.io/name=pause-634825
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_11T23_43_03_0700
	                    minikube.k8s.io/version=v1.31.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 11 Aug 2023 23:42:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-634825
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 11 Aug 2023 23:44:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 11 Aug 2023 23:44:23 +0000   Fri, 11 Aug 2023 23:42:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 11 Aug 2023 23:44:23 +0000   Fri, 11 Aug 2023 23:42:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 11 Aug 2023 23:44:23 +0000   Fri, 11 Aug 2023 23:42:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 11 Aug 2023 23:44:23 +0000   Fri, 11 Aug 2023 23:43:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-634825
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022628Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022628Ki
	  pods:               110
	System Info:
	  Machine ID:                 106dbb0f244c44e2b6a5835bc1103596
	  System UUID:                c11e9341-4ad9-484b-a824-aefd9ce8ab2f
	  Boot ID:                    9640b2fc-8f02-48dc-9a98-7457f33cfb40
	  Kernel Version:             5.15.0-1040-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.27.4
	  Kube-Proxy Version:         v1.27.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-7zz5s                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     85s
	  kube-system                 etcd-pause-634825                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         98s
	  kube-system                 kindnet-q6qpq                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      85s
	  kube-system                 kube-apiserver-pause-634825             250m (12%)    0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-controller-manager-pause-634825    200m (10%)    0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-proxy-sptbv                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-scheduler-pause-634825             100m (5%)     0 (0%)      0 (0%)           0 (0%)         100s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 85s                  kube-proxy       
	  Normal  Starting                 16s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  108s (x8 over 108s)  kubelet          Node pause-634825 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    108s (x8 over 108s)  kubelet          Node pause-634825 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     108s (x8 over 108s)  kubelet          Node pause-634825 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     98s                  kubelet          Node pause-634825 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  98s                  kubelet          Node pause-634825 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    98s                  kubelet          Node pause-634825 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 98s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           86s                  node-controller  Node pause-634825 event: Registered Node pause-634825 in Controller
	  Normal  NodeReady                55s                  kubelet          Node pause-634825 status is now: NodeReady
	  Normal  Starting                 27s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  26s (x8 over 27s)    kubelet          Node pause-634825 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26s (x8 over 27s)    kubelet          Node pause-634825 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26s (x8 over 27s)    kubelet          Node pause-634825 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5s                   node-controller  Node pause-634825 event: Registered Node pause-634825 in Controller
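
A quirk worth knowing when reading the resource tables above in raw gopogh reports: kubectl's percentages (for example 100m (5%)) come out as 100m (5%!)(MISSING) whenever the captured describe output is replayed through a Printf-style formatter, because the literal % is parsed as a verb with no matching argument. A minimal Go sketch of the effect, illustrative only and not minikube's actual logging code:

    // percent_artifact.go: a literal '%' in pre-formatted text becomes
    // "%!<char>(MISSING)" when that text is reused as a format string.
    package main

    import "fmt"

    func main() {
    	captured := "cpu 850m (42%)" // already-formatted output containing '%'

    	fmt.Println(captured)       // printed as data: cpu 850m (42%)
    	fmt.Printf(captured + "\n") // parsed as a format: cpu 850m (42%!)(MISSING)
    }

The fix in harness code is always the same: print captured text as an argument (fmt.Printf("%s", captured)), never as the format string itself.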
	
	* 
	* ==> dmesg <==
	* [  +0.000754] FS-Cache: N-cookie c=0000000c [p=00000003 fl=2 nc=0 na=1]
	[  +0.000944] FS-Cache: N-cookie d=0000000087cf7eaf{9p.inode} n=00000000d7da585e
	[  +0.001054] FS-Cache: N-key=[8] '805b3b0000000000'
	[  +0.003010] FS-Cache: Duplicate cookie detected
	[  +0.000685] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000959] FS-Cache: O-cookie d=0000000087cf7eaf{9p.inode} n=0000000004a8382c
	[  +0.001063] FS-Cache: O-key=[8] '805b3b0000000000'
	[  +0.000759] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.000963] FS-Cache: N-cookie d=0000000087cf7eaf{9p.inode} n=0000000045141c8c
	[  +0.001051] FS-Cache: N-key=[8] '805b3b0000000000'
	[  +2.763262] FS-Cache: Duplicate cookie detected
	[  +0.000715] FS-Cache: O-cookie c=00000004 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000977] FS-Cache: O-cookie d=0000000087cf7eaf{9p.inode} n=000000003c07f4d4
	[  +0.001127] FS-Cache: O-key=[8] '7f5b3b0000000000'
	[  +0.000727] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.000956] FS-Cache: N-cookie d=0000000087cf7eaf{9p.inode} n=000000006a6921aa
	[  +0.001095] FS-Cache: N-key=[8] '7f5b3b0000000000'
	[  +0.384460] FS-Cache: Duplicate cookie detected
	[  +0.000735] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000980] FS-Cache: O-cookie d=0000000087cf7eaf{9p.inode} n=0000000084bb64d5
	[  +0.001049] FS-Cache: O-key=[8] '8a5b3b0000000000'
	[  +0.000719] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.000976] FS-Cache: N-cookie d=0000000087cf7eaf{9p.inode} n=00000000d7da585e
	[  +0.001049] FS-Cache: N-key=[8] '8a5b3b0000000000'
	[Aug11 23:13] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	* 
	* ==> etcd [6024f9aef0a281043827e7556097bdddf70695ffbdd8e11a3a5f3ca6baca26f9] <==
	* {"level":"warn","ts":"2023-08-11T23:43:58.997Z","caller":"flags/flag.go:93","msg":"unrecognized environment variable","environment-variable":"ETCD_UNSUPPORTED_ARCH=arm64"}
	{"level":"info","ts":"2023-08-11T23:43:59.011Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.76.2:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.76.2:2380","--initial-cluster=pause-634825=https://192.168.76.2:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.76.2:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.76.2:2380","--name=pause-634825","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/
var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2023-08-11T23:43:59.011Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"info","ts":"2023-08-11T23:43:59.011Z","caller":"embed/etcd.go:124","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2023-08-11T23:43:59.011Z","caller":"embed/etcd.go:484","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-08-11T23:43:59.011Z","caller":"embed/etcd.go:132","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"]}
	{"level":"info","ts":"2023-08-11T23:43:59.012Z","caller":"embed/etcd.go:306","msg":"starting an etcd server","etcd-version":"3.5.7","git-sha":"215b53cf3","go-version":"go1.17.13","go-os":"linux","go-arch":"arm64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"pause-634825","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token"
:"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2023-08-11T23:43:59.017Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"5.079287ms"}
	{"level":"info","ts":"2023-08-11T23:43:59.082Z","caller":"etcdserver/server.go:530","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2023-08-11T23:43:59.090Z","caller":"etcdserver/raft.go:529","msg":"restarting local member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","commit-index":448}
	{"level":"info","ts":"2023-08-11T23:43:59.091Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=()"}
	{"level":"info","ts":"2023-08-11T23:43:59.091Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became follower at term 2"}
	{"level":"info","ts":"2023-08-11T23:43:59.091Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft ea7e25599daad906 [peers: [], term: 2, commit: 448, applied: 0, lastindex: 448, lastterm: 2]"}
	{"level":"warn","ts":"2023-08-11T23:43:59.108Z","caller":"auth/store.go:1234","msg":"simple token is not cryptographically signed"}
	
	* 
	* ==> etcd [97fa57d0ed11c36b87bbcbb8fd6c82944f930e66220cdbdcafe3af34dfe79fb0] <==
	* {"level":"info","ts":"2023-08-11T23:44:15.222Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2023-08-11T23:44:15.222Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-11T23:44:15.222Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-11T23:44:15.225Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-08-11T23:44:15.225Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-08-11T23:44:15.225Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-08-11T23:44:15.226Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-08-11T23:44:15.226Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-08-11T23:44:15.226Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-08-11T23:44:15.227Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-08-11T23:44:15.227Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-08-11T23:44:16.402Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2023-08-11T23:44:16.402Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-08-11T23:44:16.402Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2023-08-11T23:44:16.402Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2023-08-11T23:44:16.402Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-08-11T23:44:16.402Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2023-08-11T23:44:16.402Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-08-11T23:44:16.406Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-634825 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-11T23:44:16.407Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-11T23:44:16.411Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2023-08-11T23:44:16.417Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-11T23:44:16.418Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-11T23:44:16.421Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-11T23:44:16.421Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  23:44:41 up  1:27,  0 users,  load average: 4.06, 3.03, 2.16
	Linux pause-634825 5.15.0-1040-aws #45~20.04.1-Ubuntu SMP Tue Jul 11 19:11:12 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [792bd634ed3940953f2c22c64a76c28dabb0b8c94e837fef81ac4a01d8942c96] <==
	* I0811 23:44:10.532723       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0811 23:44:10.532789       1 main.go:107] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0811 23:44:10.532908       1 main.go:116] setting mtu 1500 for CNI 
	I0811 23:44:10.532922       1 main.go:146] kindnetd IP family: "ipv4"
	I0811 23:44:10.532932       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0811 23:44:10.741771       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0811 23:44:10.829384       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> kindnet [fd096ea5b2598f6d5669d906e60351567b0f7e31d776bb25d0d915ee7d8ff33b] <==
	* I0811 23:44:24.225911       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0811 23:44:24.228237       1 main.go:107] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0811 23:44:24.228475       1 main.go:116] setting mtu 1500 for CNI 
	I0811 23:44:24.229279       1 main.go:146] kindnetd IP family: "ipv4"
	I0811 23:44:24.229333       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0811 23:44:24.553226       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0811 23:44:24.553373       1 main.go:227] handling current node
	I0811 23:44:34.659587       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0811 23:44:34.659972       1 main.go:227] handling current node
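
Both kindnet containers follow the same startup pattern: the node list is fetched in a retry loop, so the "connection refused" errors logged while the apiserver was still restarting are retried rather than fatal, and the second instance proceeds to handle the node once the fetch succeeds. A minimal sketch of that retry-until-reachable pattern, using plain net/http against a hypothetical URL rather than kindnet's actual client code:

    // retry_fetch.go: retry an HTTP GET with a fixed backoff until the server
    // answers, mirroring kindnet's "Failed to get nodes, retrying" loop.
    package main

    import (
    	"fmt"
    	"log"
    	"net/http"
    	"time"
    )

    func main() {
    	// Hypothetical target; real in-cluster access would also need the
    	// service account CA bundle and bearer token.
    	const url = "http://127.0.0.1:8080/api/v1/nodes"

    	for attempt := 1; attempt <= 10; attempt++ {
    		resp, err := http.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			fmt.Println("reachable, status:", resp.Status)
    			return
    		}
    		log.Printf("Failed to get nodes, retrying after error: %v", err)
    		time.Sleep(3 * time.Second) // fixed backoff; exponential is also common
    	}
    	log.Fatal("gave up after 10 attempts")
    }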
	
	* 
	* ==> kube-apiserver [24d42e36b289e4acc22437a5331118aff3e510567fbaa83f6407823d129ad5d7] <==
	* 
	* 
	* ==> kube-apiserver [a8a7f2ce1a04500c1ff5b075c85a3f290ec2a118dbea68eacacbea277af8450d] <==
	* I0811 23:44:22.947829       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0811 23:44:22.947902       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0811 23:44:22.954361       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0811 23:44:22.954381       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	E0811 23:44:23.271701       1 controller.go:155] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0811 23:44:23.297827       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0811 23:44:23.337794       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0811 23:44:23.343825       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0811 23:44:23.344660       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0811 23:44:23.346669       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0811 23:44:23.346782       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0811 23:44:23.347545       1 shared_informer.go:318] Caches are synced for configmaps
	I0811 23:44:23.355287       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0811 23:44:23.355355       1 aggregator.go:152] initial CRD sync complete...
	I0811 23:44:23.355400       1 autoregister_controller.go:141] Starting autoregister controller
	I0811 23:44:23.355413       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0811 23:44:23.355422       1 cache.go:39] Caches are synced for autoregister controller
	I0811 23:44:23.355480       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0811 23:44:23.422329       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0811 23:44:24.055180       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0811 23:44:26.085236       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0811 23:44:26.276606       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0811 23:44:26.294868       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0811 23:44:26.366394       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0811 23:44:26.375063       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-controller-manager [bcc7ddc18d12e3006b96f9fe103e6c1da53af66f951b985ba59f60349a7b8163] <==
	* I0811 23:44:35.730264       1 shared_informer.go:318] Caches are synced for resource quota
	I0811 23:44:35.730279       1 shared_informer.go:318] Caches are synced for deployment
	I0811 23:44:35.742697       1 shared_informer.go:318] Caches are synced for GC
	I0811 23:44:35.744971       1 shared_informer.go:318] Caches are synced for resource quota
	I0811 23:44:35.754289       1 shared_informer.go:318] Caches are synced for stateful set
	I0811 23:44:35.754431       1 shared_informer.go:318] Caches are synced for daemon sets
	I0811 23:44:35.763476       1 shared_informer.go:318] Caches are synced for ephemeral
	I0811 23:44:35.773215       1 shared_informer.go:318] Caches are synced for PVC protection
	I0811 23:44:35.774690       1 shared_informer.go:318] Caches are synced for taint
	I0811 23:44:35.774774       1 node_lifecycle_controller.go:1223] "Initializing eviction metric for zone" zone=""
	I0811 23:44:35.774851       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-634825"
	I0811 23:44:35.774890       1 node_lifecycle_controller.go:1069] "Controller detected that zone is now in new state" zone="" newState=Normal
	I0811 23:44:35.774903       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0811 23:44:35.774917       1 taint_manager.go:211] "Sending events to api server"
	I0811 23:44:35.775457       1 event.go:307] "Event occurred" object="pause-634825" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-634825 event: Registered Node pause-634825 in Controller"
	I0811 23:44:35.775488       1 shared_informer.go:318] Caches are synced for endpoint
	I0811 23:44:35.775519       1 shared_informer.go:318] Caches are synced for HPA
	I0811 23:44:35.775546       1 shared_informer.go:318] Caches are synced for persistent volume
	I0811 23:44:35.775647       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0811 23:44:35.775901       1 shared_informer.go:318] Caches are synced for attach detach
	I0811 23:44:35.782214       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0811 23:44:35.784572       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0811 23:44:36.099928       1 shared_informer.go:318] Caches are synced for garbage collector
	I0811 23:44:36.100060       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0811 23:44:36.100481       1 shared_informer.go:318] Caches are synced for garbage collector
	
	* 
	* ==> kube-controller-manager [d01488efe142281153b08384869059e10873d2fd7f4f17d722b27b398964923e] <==
	* 
	* 
	* ==> kube-proxy [4425001a8d2b454a91b170a878058197e5f7abe38fa3cdd8e7619a58114a69dd] <==
	* E0811 23:44:10.522789       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-634825": dial tcp 192.168.76.2:8443: connect: connection refused
	
	* 
	* ==> kube-proxy [c7c4e7c90ec34f302c92216e3692ff9844e7c07ed749ac9c6f7b8b52e12e1284] <==
	* I0811 23:44:24.547550       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I0811 23:44:24.547678       1 server_others.go:110] "Detected node IP" address="192.168.76.2"
	I0811 23:44:24.547701       1 server_others.go:554] "Using iptables proxy"
	I0811 23:44:24.739991       1 server_others.go:192] "Using iptables Proxier"
	I0811 23:44:24.740028       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0811 23:44:24.740037       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0811 23:44:24.740052       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0811 23:44:24.740142       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0811 23:44:24.740708       1 server.go:658] "Version info" version="v1.27.4"
	I0811 23:44:24.740797       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0811 23:44:24.742192       1 config.go:188] "Starting service config controller"
	I0811 23:44:24.743804       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0811 23:44:24.742368       1 config.go:97] "Starting endpoint slice config controller"
	I0811 23:44:24.743929       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0811 23:44:24.743099       1 config.go:315] "Starting node config controller"
	I0811 23:44:24.744004       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0811 23:44:24.845320       1 shared_informer.go:318] Caches are synced for node config
	I0811 23:44:24.845322       1 shared_informer.go:318] Caches are synced for service config
	I0811 23:44:24.845340       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [7447e17b7ff999ce6aa012abad756668d5a050f494e6f1aae1596c4c7f9b7a11] <==
	* I0811 23:44:18.698235       1 serving.go:348] Generated self-signed cert in-memory
	I0811 23:44:23.416698       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.4"
	I0811 23:44:23.416804       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0811 23:44:23.422723       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0811 23:44:23.422755       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0811 23:44:23.422975       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0811 23:44:23.422999       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0811 23:44:23.423026       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0811 23:44:23.423032       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0811 23:44:23.423997       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0811 23:44:23.424123       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0811 23:44:23.527379       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0811 23:44:23.527518       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I0811 23:44:23.527629       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
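
The "Waiting for caches to sync" / "Caches are synced" pairs that recur in the scheduler and kube-proxy logs are client-go's shared-informer handshake: each component starts its informers and blocks until the initial list from the apiserver has been mirrored into its local cache before it begins serving. A minimal client-go sketch of the same handshake; the kubeconfig path is an assumption:

    // informer_sync.go: start a pod informer and block until its cache has
    // completed the initial sync, as the components above do at startup.
    package main

    import (
    	"fmt"
    	"log"
    	"time"

    	"k8s.io/client-go/informers"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/cache"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Hypothetical kubeconfig path; inside a pod, rest.InClusterConfig() is typical.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
    	if err != nil {
    		log.Fatal(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}

    	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
    	pods := factory.Core().V1().Pods().Informer()

    	stop := make(chan struct{})
    	defer close(stop)
    	factory.Start(stop) // "Waiting for caches to sync ..."

    	if !cache.WaitForCacheSync(stop, pods.HasSynced) {
    		log.Fatal("cache never synced")
    	}
    	fmt.Println("Caches are synced") // safe to start the control loop now
    }

WaitForCacheSync returns false only if the stop channel closes first, which is why a failed sync is treated as fatal here.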
	
	* 
	* ==> kube-scheduler [9ecaf1f2abdda582ed04056f0de0846a97e5803f559b05701a13e57372cf544a] <==
	* 
	* 
	* ==> kubelet <==
	* Aug 11 23:44:15 pause-634825 kubelet[3206]: E0811 23:44:15.057353    3206 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="pause-634825"
	Aug 11 23:44:16 pause-634825 kubelet[3206]: I0811 23:44:16.659632    3206 kubelet_node_status.go:70] "Attempting to register node" node="pause-634825"
	Aug 11 23:44:23 pause-634825 kubelet[3206]: I0811 23:44:23.386293    3206 kubelet_node_status.go:108] "Node was previously registered" node="pause-634825"
	Aug 11 23:44:23 pause-634825 kubelet[3206]: I0811 23:44:23.386403    3206 kubelet_node_status.go:73] "Successfully registered node" node="pause-634825"
	Aug 11 23:44:23 pause-634825 kubelet[3206]: I0811 23:44:23.389505    3206 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 11 23:44:23 pause-634825 kubelet[3206]: I0811 23:44:23.390942    3206 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 11 23:44:23 pause-634825 kubelet[3206]: I0811 23:44:23.520970    3206 apiserver.go:52] "Watching apiserver"
	Aug 11 23:44:23 pause-634825 kubelet[3206]: I0811 23:44:23.524373    3206 topology_manager.go:212] "Topology Admit Handler"
	Aug 11 23:44:23 pause-634825 kubelet[3206]: I0811 23:44:23.524475    3206 topology_manager.go:212] "Topology Admit Handler"
	Aug 11 23:44:23 pause-634825 kubelet[3206]: I0811 23:44:23.524520    3206 topology_manager.go:212] "Topology Admit Handler"
	Aug 11 23:44:23 pause-634825 kubelet[3206]: I0811 23:44:23.538811    3206 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
	Aug 11 23:44:23 pause-634825 kubelet[3206]: I0811 23:44:23.560707    3206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqb66\" (UniqueName: \"kubernetes.io/projected/cf34f80e-7018-4003-b7c5-94c7c8ea41da-kube-api-access-pqb66\") pod \"kindnet-q6qpq\" (UID: \"cf34f80e-7018-4003-b7c5-94c7c8ea41da\") " pod="kube-system/kindnet-q6qpq"
	Aug 11 23:44:23 pause-634825 kubelet[3206]: I0811 23:44:23.560830    3206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5khfj\" (UniqueName: \"kubernetes.io/projected/f9473cb2-7b87-40e8-891a-aa651f27406d-kube-api-access-5khfj\") pod \"kube-proxy-sptbv\" (UID: \"f9473cb2-7b87-40e8-891a-aa651f27406d\") " pod="kube-system/kube-proxy-sptbv"
	Aug 11 23:44:23 pause-634825 kubelet[3206]: I0811 23:44:23.561077    3206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cf34f80e-7018-4003-b7c5-94c7c8ea41da-xtables-lock\") pod \"kindnet-q6qpq\" (UID: \"cf34f80e-7018-4003-b7c5-94c7c8ea41da\") " pod="kube-system/kindnet-q6qpq"
	Aug 11 23:44:23 pause-634825 kubelet[3206]: I0811 23:44:23.561333    3206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f9473cb2-7b87-40e8-891a-aa651f27406d-kube-proxy\") pod \"kube-proxy-sptbv\" (UID: \"f9473cb2-7b87-40e8-891a-aa651f27406d\") " pod="kube-system/kube-proxy-sptbv"
	Aug 11 23:44:23 pause-634825 kubelet[3206]: I0811 23:44:23.561466    3206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f9473cb2-7b87-40e8-891a-aa651f27406d-lib-modules\") pod \"kube-proxy-sptbv\" (UID: \"f9473cb2-7b87-40e8-891a-aa651f27406d\") " pod="kube-system/kube-proxy-sptbv"
	Aug 11 23:44:23 pause-634825 kubelet[3206]: I0811 23:44:23.561512    3206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89glp\" (UniqueName: \"kubernetes.io/projected/e3005957-3506-4ea1-a12a-1961a28c67d4-kube-api-access-89glp\") pod \"coredns-5d78c9869d-7zz5s\" (UID: \"e3005957-3506-4ea1-a12a-1961a28c67d4\") " pod="kube-system/coredns-5d78c9869d-7zz5s"
	Aug 11 23:44:23 pause-634825 kubelet[3206]: I0811 23:44:23.561540    3206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/cf34f80e-7018-4003-b7c5-94c7c8ea41da-cni-cfg\") pod \"kindnet-q6qpq\" (UID: \"cf34f80e-7018-4003-b7c5-94c7c8ea41da\") " pod="kube-system/kindnet-q6qpq"
	Aug 11 23:44:23 pause-634825 kubelet[3206]: I0811 23:44:23.561564    3206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf34f80e-7018-4003-b7c5-94c7c8ea41da-lib-modules\") pod \"kindnet-q6qpq\" (UID: \"cf34f80e-7018-4003-b7c5-94c7c8ea41da\") " pod="kube-system/kindnet-q6qpq"
	Aug 11 23:44:23 pause-634825 kubelet[3206]: I0811 23:44:23.561607    3206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f9473cb2-7b87-40e8-891a-aa651f27406d-xtables-lock\") pod \"kube-proxy-sptbv\" (UID: \"f9473cb2-7b87-40e8-891a-aa651f27406d\") " pod="kube-system/kube-proxy-sptbv"
	Aug 11 23:44:23 pause-634825 kubelet[3206]: I0811 23:44:23.561659    3206 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3005957-3506-4ea1-a12a-1961a28c67d4-config-volume\") pod \"coredns-5d78c9869d-7zz5s\" (UID: \"e3005957-3506-4ea1-a12a-1961a28c67d4\") " pod="kube-system/coredns-5d78c9869d-7zz5s"
	Aug 11 23:44:23 pause-634825 kubelet[3206]: I0811 23:44:23.561735    3206 reconciler.go:41] "Reconciler: start to sync state"
	Aug 11 23:44:23 pause-634825 kubelet[3206]: I0811 23:44:23.826011    3206 scope.go:115] "RemoveContainer" containerID="d529e1a8dc40a6ab1f2b51e1d86c5819a7cae02e60dcc86a018f72ddb94523f1"
	Aug 11 23:44:23 pause-634825 kubelet[3206]: I0811 23:44:23.827185    3206 scope.go:115] "RemoveContainer" containerID="4425001a8d2b454a91b170a878058197e5f7abe38fa3cdd8e7619a58114a69dd"
	Aug 11 23:44:23 pause-634825 kubelet[3206]: I0811 23:44:23.828253    3206 scope.go:115] "RemoveContainer" containerID="792bd634ed3940953f2c22c64a76c28dabb0b8c94e837fef81ac4a01d8942c96"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-634825 -n pause-634825
helpers_test.go:261: (dbg) Run:  kubectl --context pause-634825 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (52.92s)

                                                
                                    

Test pass (265/304)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 27.96
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
10 TestDownloadOnly/v1.27.4/json-events 13.1
11 TestDownloadOnly/v1.27.4/preload-exists 0
15 TestDownloadOnly/v1.27.4/LogsDuration 0.07
17 TestDownloadOnly/v1.28.0-rc.0/json-events 28.64
18 TestDownloadOnly/v1.28.0-rc.0/preload-exists 0
22 TestDownloadOnly/v1.28.0-rc.0/LogsDuration 0.08
23 TestDownloadOnly/DeleteAll 0.23
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.14
26 TestBinaryMirror 0.59
29 TestAddons/Setup 152.09
31 TestAddons/parallel/Registry 16.44
33 TestAddons/parallel/InspektorGadget 11.07
34 TestAddons/parallel/MetricsServer 5.89
37 TestAddons/parallel/CSI 40.16
38 TestAddons/parallel/Headlamp 11.74
39 TestAddons/parallel/CloudSpanner 5.73
42 TestAddons/serial/GCPAuth/Namespaces 0.19
43 TestAddons/StoppedEnableDisable 12.33
44 TestCertOptions 36.91
45 TestCertExpiration 283.8
47 TestForceSystemdFlag 41.66
48 TestForceSystemdEnv 45.24
54 TestErrorSpam/setup 31.46
55 TestErrorSpam/start 0.85
56 TestErrorSpam/status 1.1
57 TestErrorSpam/pause 1.99
58 TestErrorSpam/unpause 1.98
59 TestErrorSpam/stop 1.43
62 TestFunctional/serial/CopySyncFile 0
63 TestFunctional/serial/StartWithProxy 75.6
64 TestFunctional/serial/AuditLog 0
65 TestFunctional/serial/SoftStart 42.58
66 TestFunctional/serial/KubeContext 0.07
67 TestFunctional/serial/KubectlGetPods 0.11
70 TestFunctional/serial/CacheCmd/cache/add_remote 3.99
71 TestFunctional/serial/CacheCmd/cache/add_local 1.12
72 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
73 TestFunctional/serial/CacheCmd/cache/list 0.05
74 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.34
75 TestFunctional/serial/CacheCmd/cache/cache_reload 2.09
76 TestFunctional/serial/CacheCmd/cache/delete 0.11
77 TestFunctional/serial/MinikubeKubectlCmd 0.14
78 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
79 TestFunctional/serial/ExtraConfig 34.05
80 TestFunctional/serial/ComponentHealth 0.1
81 TestFunctional/serial/LogsCmd 1.81
82 TestFunctional/serial/LogsFileCmd 1.84
83 TestFunctional/serial/InvalidService 4.39
85 TestFunctional/parallel/ConfigCmd 0.47
86 TestFunctional/parallel/DashboardCmd 9.46
87 TestFunctional/parallel/DryRun 0.49
88 TestFunctional/parallel/InternationalLanguage 0.21
89 TestFunctional/parallel/StatusCmd 1.19
93 TestFunctional/parallel/ServiceCmdConnect 10.73
94 TestFunctional/parallel/AddonsCmd 0.17
95 TestFunctional/parallel/PersistentVolumeClaim 24.87
97 TestFunctional/parallel/SSHCmd 0.77
98 TestFunctional/parallel/CpCmd 1.78
100 TestFunctional/parallel/FileSync 0.41
101 TestFunctional/parallel/CertSync 2.36
105 TestFunctional/parallel/NodeLabels 0.13
107 TestFunctional/parallel/NonActiveRuntimeDisabled 0.86
109 TestFunctional/parallel/License 0.37
111 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.68
112 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
114 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.55
115 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.16
116 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
120 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
121 TestFunctional/parallel/ServiceCmd/DeployApp 7.27
122 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
123 TestFunctional/parallel/ProfileCmd/profile_list 0.42
124 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
125 TestFunctional/parallel/MountCmd/any-port 8.51
126 TestFunctional/parallel/ServiceCmd/List 0.62
127 TestFunctional/parallel/ServiceCmd/JSONOutput 0.55
128 TestFunctional/parallel/ServiceCmd/HTTPS 0.42
129 TestFunctional/parallel/ServiceCmd/Format 0.42
130 TestFunctional/parallel/ServiceCmd/URL 0.47
131 TestFunctional/parallel/MountCmd/specific-port 2.45
132 TestFunctional/parallel/MountCmd/VerifyCleanup 3.28
133 TestFunctional/parallel/Version/short 0.07
134 TestFunctional/parallel/Version/components 1.09
135 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
136 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
137 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
138 TestFunctional/parallel/ImageCommands/ImageListYaml 0.32
139 TestFunctional/parallel/ImageCommands/ImageBuild 2.91
140 TestFunctional/parallel/ImageCommands/Setup 1.93
141 TestFunctional/parallel/UpdateContextCmd/no_changes 0.23
142 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.25
143 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.18
144 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.03
145 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.84
146 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.52
147 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.91
148 TestFunctional/parallel/ImageCommands/ImageRemove 0.54
149 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.28
150 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.07
151 TestFunctional/delete_addon-resizer_images 0.08
152 TestFunctional/delete_my-image_image 0.02
153 TestFunctional/delete_minikube_cached_images 0.02
157 TestIngressAddonLegacy/StartLegacyK8sCluster 95.28
159 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 12.01
160 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.65
164 TestJSONOutput/start/Command 74.67
165 TestJSONOutput/start/Audit 0
167 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
168 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
170 TestJSONOutput/pause/Command 0.81
171 TestJSONOutput/pause/Audit 0
173 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
174 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
176 TestJSONOutput/unpause/Command 0.74
177 TestJSONOutput/unpause/Audit 0
179 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
180 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
182 TestJSONOutput/stop/Command 5.84
183 TestJSONOutput/stop/Audit 0
185 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
187 TestErrorJSONOutput 0.23
189 TestKicCustomNetwork/create_custom_network 43.24
190 TestKicCustomNetwork/use_default_bridge_network 37.84
191 TestKicExistingNetwork 33.21
192 TestKicCustomSubnet 32.49
193 TestKicStaticIP 33.43
194 TestMainNoArgs 0.05
195 TestMinikubeProfile 76.87
198 TestMountStart/serial/StartWithMountFirst 9.69
199 TestMountStart/serial/VerifyMountFirst 0.28
200 TestMountStart/serial/StartWithMountSecond 7.13
201 TestMountStart/serial/VerifyMountSecond 0.29
202 TestMountStart/serial/DeleteFirst 1.68
203 TestMountStart/serial/VerifyMountPostDelete 0.29
204 TestMountStart/serial/Stop 1.22
205 TestMountStart/serial/RestartStopped 7.94
206 TestMountStart/serial/VerifyMountPostStop 0.29
209 TestMultiNode/serial/FreshStart2Nodes 98.56
210 TestMultiNode/serial/DeployApp2Nodes 5.42
212 TestMultiNode/serial/AddNode 50.78
213 TestMultiNode/serial/ProfileList 0.35
214 TestMultiNode/serial/CopyFile 11.12
215 TestMultiNode/serial/StopNode 2.38
216 TestMultiNode/serial/StartAfterStop 12.11
217 TestMultiNode/serial/RestartKeepsNodes 122.06
218 TestMultiNode/serial/DeleteNode 5.11
219 TestMultiNode/serial/StopMultiNode 24.07
220 TestMultiNode/serial/RestartMultiNode 86.37
221 TestMultiNode/serial/ValidateNameConflict 34.24
226 TestPreload 166.29
228 TestScheduledStopUnix 109.38
231 TestInsufficientStorage 13.41
234 TestKubernetesUpgrade 384.08
237 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
238 TestNoKubernetes/serial/StartWithK8s 39.92
239 TestNoKubernetes/serial/StartWithStopK8s 10.11
240 TestNoKubernetes/serial/Start 10.73
241 TestNoKubernetes/serial/VerifyK8sNotRunning 0.49
242 TestNoKubernetes/serial/ProfileList 1.12
243 TestNoKubernetes/serial/Stop 1.38
244 TestNoKubernetes/serial/StartNoArgs 7.72
245 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.32
246 TestStoppedBinaryUpgrade/Setup 1.17
248 TestStoppedBinaryUpgrade/MinikubeLogs 0.65
257 TestPause/serial/Start 79.46
266 TestNetworkPlugins/group/false 6.28
271 TestStartStop/group/old-k8s-version/serial/FirstStart 136.93
272 TestStartStop/group/old-k8s-version/serial/DeployApp 9.55
273 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.94
274 TestStartStop/group/old-k8s-version/serial/Stop 12.12
275 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
276 TestStartStop/group/old-k8s-version/serial/SecondStart 431.34
278 TestStartStop/group/no-preload/serial/FirstStart 104.18
279 TestStartStop/group/no-preload/serial/DeployApp 8.56
280 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.24
281 TestStartStop/group/no-preload/serial/Stop 12.16
282 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.17
283 TestStartStop/group/no-preload/serial/SecondStart 349.25
284 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.03
285 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
286 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.38
287 TestStartStop/group/old-k8s-version/serial/Pause 3.5
289 TestStartStop/group/embed-certs/serial/FirstStart 79.72
290 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 10.03
291 TestStartStop/group/embed-certs/serial/DeployApp 9.68
292 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.15
293 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.51
294 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.48
295 TestStartStop/group/no-preload/serial/Pause 3.92
296 TestStartStop/group/embed-certs/serial/Stop 12.41
298 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 81.01
299 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
300 TestStartStop/group/embed-certs/serial/SecondStart 630.36
301 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.51
302 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.28
303 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.03
304 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
305 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 348.58
306 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 9.04
307 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
308 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.39
309 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.58
311 TestStartStop/group/newest-cni/serial/FirstStart 45.43
312 TestStartStop/group/newest-cni/serial/DeployApp 0
313 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.11
314 TestStartStop/group/newest-cni/serial/Stop 1.29
315 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
316 TestStartStop/group/newest-cni/serial/SecondStart 30.79
317 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
318 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
319 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.37
320 TestStartStop/group/newest-cni/serial/Pause 3.21
321 TestNetworkPlugins/group/auto/Start 81.41
322 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.03
323 TestNetworkPlugins/group/auto/KubeletFlags 0.32
324 TestNetworkPlugins/group/auto/NetCatPod 12.43
325 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
326 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.34
327 TestStartStop/group/embed-certs/serial/Pause 3.43
328 TestNetworkPlugins/group/auto/DNS 0.3
329 TestNetworkPlugins/group/auto/Localhost 0.23
330 TestNetworkPlugins/group/auto/HairPin 0.22
331 TestNetworkPlugins/group/kindnet/Start 84.34
332 TestNetworkPlugins/group/calico/Start 73.32
333 TestNetworkPlugins/group/kindnet/ControllerPod 5.05
334 TestNetworkPlugins/group/kindnet/KubeletFlags 0.41
335 TestNetworkPlugins/group/kindnet/NetCatPod 11.41
336 TestNetworkPlugins/group/calico/ControllerPod 5.04
337 TestNetworkPlugins/group/kindnet/DNS 0.25
338 TestNetworkPlugins/group/kindnet/Localhost 0.27
339 TestNetworkPlugins/group/kindnet/HairPin 0.22
340 TestNetworkPlugins/group/calico/KubeletFlags 0.35
341 TestNetworkPlugins/group/calico/NetCatPod 11.5
342 TestNetworkPlugins/group/calico/DNS 0.28
343 TestNetworkPlugins/group/calico/Localhost 0.29
344 TestNetworkPlugins/group/calico/HairPin 0.26
345 TestNetworkPlugins/group/custom-flannel/Start 73.95
346 TestNetworkPlugins/group/enable-default-cni/Start 90.58
347 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.33
348 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.39
349 TestNetworkPlugins/group/custom-flannel/DNS 0.22
350 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
351 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
352 TestNetworkPlugins/group/flannel/Start 71.79
353 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.44
354 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.59
355 TestNetworkPlugins/group/enable-default-cni/DNS 0.26
356 TestNetworkPlugins/group/enable-default-cni/Localhost 0.24
357 TestNetworkPlugins/group/enable-default-cni/HairPin 0.21
358 TestNetworkPlugins/group/bridge/Start 88.29
359 TestNetworkPlugins/group/flannel/ControllerPod 5.04
360 TestNetworkPlugins/group/flannel/KubeletFlags 0.35
361 TestNetworkPlugins/group/flannel/NetCatPod 10.42
362 TestNetworkPlugins/group/flannel/DNS 0.23
363 TestNetworkPlugins/group/flannel/Localhost 0.18
364 TestNetworkPlugins/group/flannel/HairPin 0.18
365 TestNetworkPlugins/group/bridge/KubeletFlags 0.33
366 TestNetworkPlugins/group/bridge/NetCatPod 10.35
367 TestNetworkPlugins/group/bridge/DNS 0.19
368 TestNetworkPlugins/group/bridge/Localhost 0.18
369 TestNetworkPlugins/group/bridge/HairPin 0.18
x
+
TestDownloadOnly/v1.16.0/json-events (27.96s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-038476 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-038476 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (27.960186743s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (27.96s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-038476
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-038476: exit status 85 (73.30833ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-038476 | jenkins | v1.31.1 | 11 Aug 23 23:00 UTC |          |
	|         | -p download-only-038476        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/11 23:00:41
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.20.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0811 23:00:41.171035    7640 out.go:296] Setting OutFile to fd 1 ...
	I0811 23:00:41.171159    7640 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:00:41.171168    7640 out.go:309] Setting ErrFile to fd 2...
	I0811 23:00:41.171174    7640 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:00:41.171456    7640 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17044-2333/.minikube/bin
	W0811 23:00:41.171585    7640 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17044-2333/.minikube/config/config.json: open /home/jenkins/minikube-integration/17044-2333/.minikube/config/config.json: no such file or directory
	I0811 23:00:41.171982    7640 out.go:303] Setting JSON to true
	I0811 23:00:41.172800    7640 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":2590,"bootTime":1691792252,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0811 23:00:41.172862    7640 start.go:138] virtualization:  
	I0811 23:00:41.176028    7640 out.go:97] [download-only-038476] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	W0811 23:00:41.176245    7640 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17044-2333/.minikube/cache/preloaded-tarball: no such file or directory
	I0811 23:00:41.178356    7640 out.go:169] MINIKUBE_LOCATION=17044
	I0811 23:00:41.176380    7640 notify.go:220] Checking for updates...
	I0811 23:00:41.182107    7640 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0811 23:00:41.183900    7640 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17044-2333/kubeconfig
	I0811 23:00:41.185764    7640 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17044-2333/.minikube
	I0811 23:00:41.187525    7640 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0811 23:00:41.190944    7640 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0811 23:00:41.191591    7640 driver.go:373] Setting default libvirt URI to qemu:///system
	I0811 23:00:41.221344    7640 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0811 23:00:41.221429    7640 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0811 23:00:41.563006    7640 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2023-08-11 23:00:41.55292727 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0811 23:00:41.563117    7640 docker.go:294] overlay module found
	I0811 23:00:41.565158    7640 out.go:97] Using the docker driver based on user configuration
	I0811 23:00:41.565183    7640 start.go:298] selected driver: docker
	I0811 23:00:41.565189    7640 start.go:901] validating driver "docker" against <nil>
	I0811 23:00:41.565282    7640 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0811 23:00:41.635141    7640 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2023-08-11 23:00:41.626124436 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0811 23:00:41.635304    7640 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0811 23:00:41.635577    7640 start_flags.go:382] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0811 23:00:41.635744    7640 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0811 23:00:41.637837    7640 out.go:169] Using Docker driver with root privileges
	I0811 23:00:41.639573    7640 cni.go:84] Creating CNI manager for ""
	I0811 23:00:41.639596    7640 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0811 23:00:41.639608    7640 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0811 23:00:41.639619    7640 start_flags.go:319] config:
	{Name:download-only-038476 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-038476 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0811 23:00:41.641742    7640 out.go:97] Starting control plane node download-only-038476 in cluster download-only-038476
	I0811 23:00:41.641774    7640 cache.go:122] Beginning downloading kic base image for docker with crio
	I0811 23:00:41.643885    7640 out.go:97] Pulling base image ...
	I0811 23:00:41.643909    7640 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0811 23:00:41.644039    7640 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 in local docker daemon
	I0811 23:00:41.663865    7640 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 to local cache
	I0811 23:00:41.664011    7640 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 in local cache directory
	I0811 23:00:41.664120    7640 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 to local cache
	I0811 23:00:41.716027    7640 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I0811 23:00:41.716057    7640 cache.go:57] Caching tarball of preloaded images
	I0811 23:00:41.716225    7640 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0811 23:00:41.718439    7640 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0811 23:00:41.718480    7640 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I0811 23:00:41.868092    7640 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:743cd3b7071469270e4dbdc0d89badaa -> /home/jenkins/minikube-integration/17044-2333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I0811 23:00:50.316045    7640 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 as a tarball
	I0811 23:00:53.592925    7640 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I0811 23:00:53.593025    7640 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17044-2333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I0811 23:00:54.522776    7640 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0811 23:00:54.523163    7640 profile.go:148] Saving config to /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/download-only-038476/config.json ...
	I0811 23:00:54.523196    7640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/download-only-038476/config.json: {Name:mk3e3f44c88d9173981aa87c8f047ee1fb921b9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0811 23:00:54.523378    7640 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0811 23:00:54.523597    7640 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/arm64/kubectl.sha1 -> /home/jenkins/minikube-integration/17044-2333/.minikube/cache/linux/arm64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-038476"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

TestDownloadOnly/v1.27.4/json-events (13.1s)

=== RUN   TestDownloadOnly/v1.27.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-038476 --force --alsologtostderr --kubernetes-version=v1.27.4 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-038476 --force --alsologtostderr --kubernetes-version=v1.27.4 --container-runtime=crio --driver=docker  --container-runtime=crio: (13.101437976s)
--- PASS: TestDownloadOnly/v1.27.4/json-events (13.10s)

TestDownloadOnly/v1.27.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.27.4/preload-exists
--- PASS: TestDownloadOnly/v1.27.4/preload-exists (0.00s)

TestDownloadOnly/v1.27.4/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.27.4/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-038476
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-038476: exit status 85 (66.629498ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-038476 | jenkins | v1.31.1 | 11 Aug 23 23:00 UTC |          |
	|         | -p download-only-038476        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-038476 | jenkins | v1.31.1 | 11 Aug 23 23:01 UTC |          |
	|         | -p download-only-038476        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.4   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/11 23:01:09
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.20.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0811 23:01:09.216115    7717 out.go:296] Setting OutFile to fd 1 ...
	I0811 23:01:09.216355    7717 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:01:09.216381    7717 out.go:309] Setting ErrFile to fd 2...
	I0811 23:01:09.216402    7717 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:01:09.216720    7717 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17044-2333/.minikube/bin
	W0811 23:01:09.216883    7717 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17044-2333/.minikube/config/config.json: open /home/jenkins/minikube-integration/17044-2333/.minikube/config/config.json: no such file or directory
	I0811 23:01:09.217199    7717 out.go:303] Setting JSON to true
	I0811 23:01:09.217958    7717 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":2618,"bootTime":1691792252,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0811 23:01:09.218103    7717 start.go:138] virtualization:  
	I0811 23:01:09.220666    7717 out.go:97] [download-only-038476] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	I0811 23:01:09.222924    7717 out.go:169] MINIKUBE_LOCATION=17044
	I0811 23:01:09.221546    7717 notify.go:220] Checking for updates...
	I0811 23:01:09.226544    7717 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0811 23:01:09.228318    7717 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17044-2333/kubeconfig
	I0811 23:01:09.230062    7717 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17044-2333/.minikube
	I0811 23:01:09.231818    7717 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0811 23:01:09.235368    7717 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0811 23:01:09.235893    7717 config.go:182] Loaded profile config "download-only-038476": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W0811 23:01:09.235937    7717 start.go:809] api.Load failed for download-only-038476: filestore "download-only-038476": Docker machine "download-only-038476" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0811 23:01:09.236061    7717 driver.go:373] Setting default libvirt URI to qemu:///system
	W0811 23:01:09.236086    7717 start.go:809] api.Load failed for download-only-038476: filestore "download-only-038476": Docker machine "download-only-038476" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0811 23:01:09.260836    7717 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0811 23:01:09.260913    7717 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0811 23:01:09.359443    7717 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-08-11 23:01:09.34943395 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0811 23:01:09.359548    7717 docker.go:294] overlay module found
	I0811 23:01:09.361700    7717 out.go:97] Using the docker driver based on existing profile
	I0811 23:01:09.361724    7717 start.go:298] selected driver: docker
	I0811 23:01:09.361737    7717 start.go:901] validating driver "docker" against &{Name:download-only-038476 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-038476 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0811 23:01:09.361914    7717 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0811 23:01:09.440313    7717 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-08-11 23:01:09.430162353 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0811 23:01:09.440768    7717 cni.go:84] Creating CNI manager for ""
	I0811 23:01:09.440778    7717 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0811 23:01:09.440789    7717 start_flags.go:319] config:
	{Name:download-only-038476 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:download-only-038476 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0811 23:01:09.442633    7717 out.go:97] Starting control plane node download-only-038476 in cluster download-only-038476
	I0811 23:01:09.442660    7717 cache.go:122] Beginning downloading kic base image for docker with crio
	I0811 23:01:09.444293    7717 out.go:97] Pulling base image ...
	I0811 23:01:09.444316    7717 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0811 23:01:09.444475    7717 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 in local docker daemon
	I0811 23:01:09.462398    7717 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 to local cache
	I0811 23:01:09.462514    7717 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 in local cache directory
	I0811 23:01:09.462538    7717 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 in local cache directory, skipping pull
	I0811 23:01:09.462546    7717 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 exists in cache, skipping pull
	I0811 23:01:09.462554    7717 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 as a tarball
	I0811 23:01:09.520600    7717 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.4/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-arm64.tar.lz4
	I0811 23:01:09.520631    7717 cache.go:57] Caching tarball of preloaded images
	I0811 23:01:09.520776    7717 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0811 23:01:09.522891    7717 out.go:97] Downloading Kubernetes v1.27.4 preload ...
	I0811 23:01:09.522921    7717 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-arm64.tar.lz4 ...
	I0811 23:01:09.639759    7717 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.4/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-arm64.tar.lz4?checksum=md5:94c43c28edd6dc9f776b15426d1b273c -> /home/jenkins/minikube-integration/17044-2333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-038476"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.4/LogsDuration (0.07s)

TestDownloadOnly/v1.28.0-rc.0/json-events (28.64s)

=== RUN   TestDownloadOnly/v1.28.0-rc.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-038476 --force --alsologtostderr --kubernetes-version=v1.28.0-rc.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-038476 --force --alsologtostderr --kubernetes-version=v1.28.0-rc.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (28.637669493s)
--- PASS: TestDownloadOnly/v1.28.0-rc.0/json-events (28.64s)

TestDownloadOnly/v1.28.0-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.28.0-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0-rc.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.0-rc.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-038476
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-038476: exit status 85 (74.168751ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-038476 | jenkins | v1.31.1 | 11 Aug 23 23:00 UTC |          |
	|         | -p download-only-038476           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-038476 | jenkins | v1.31.1 | 11 Aug 23 23:01 UTC |          |
	|         | -p download-only-038476           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.4      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-038476 | jenkins | v1.31.1 | 11 Aug 23 23:01 UTC |          |
	|         | -p download-only-038476           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.0-rc.0 |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/11 23:01:22
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.20.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0811 23:01:22.375532    7794 out.go:296] Setting OutFile to fd 1 ...
	I0811 23:01:22.375725    7794 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:01:22.375753    7794 out.go:309] Setting ErrFile to fd 2...
	I0811 23:01:22.375775    7794 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:01:22.376043    7794 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17044-2333/.minikube/bin
	W0811 23:01:22.376191    7794 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17044-2333/.minikube/config/config.json: open /home/jenkins/minikube-integration/17044-2333/.minikube/config/config.json: no such file or directory
	I0811 23:01:22.376437    7794 out.go:303] Setting JSON to true
	I0811 23:01:22.377169    7794 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":2631,"bootTime":1691792252,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0811 23:01:22.377256    7794 start.go:138] virtualization:  
	I0811 23:01:22.379817    7794 out.go:97] [download-only-038476] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	I0811 23:01:22.382111    7794 out.go:169] MINIKUBE_LOCATION=17044
	I0811 23:01:22.380150    7794 notify.go:220] Checking for updates...
	I0811 23:01:22.385629    7794 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0811 23:01:22.387405    7794 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17044-2333/kubeconfig
	I0811 23:01:22.389245    7794 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17044-2333/.minikube
	I0811 23:01:22.390986    7794 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0811 23:01:22.394420    7794 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0811 23:01:22.394907    7794 config.go:182] Loaded profile config "download-only-038476": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	W0811 23:01:22.394997    7794 start.go:809] api.Load failed for download-only-038476: filestore "download-only-038476": Docker machine "download-only-038476" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0811 23:01:22.395114    7794 driver.go:373] Setting default libvirt URI to qemu:///system
	W0811 23:01:22.395138    7794 start.go:809] api.Load failed for download-only-038476: filestore "download-only-038476": Docker machine "download-only-038476" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0811 23:01:22.419476    7794 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0811 23:01:22.419565    7794 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0811 23:01:22.513046    7794 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-08-11 23:01:22.502155489 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0811 23:01:22.513183    7794 docker.go:294] overlay module found
	I0811 23:01:22.515482    7794 out.go:97] Using the docker driver based on existing profile
	I0811 23:01:22.515519    7794 start.go:298] selected driver: docker
	I0811 23:01:22.515526    7794 start.go:901] validating driver "docker" against &{Name:download-only-038476 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:download-only-038476 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0811 23:01:22.515738    7794 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0811 23:01:22.590114    7794 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-08-11 23:01:22.580782407 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0811 23:01:22.590564    7794 cni.go:84] Creating CNI manager for ""
	I0811 23:01:22.590574    7794 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0811 23:01:22.590586    7794 start_flags.go:319] config:
	{Name:download-only-038476 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0-rc.0 ClusterName:download-only-038476 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0811 23:01:22.592853    7794 out.go:97] Starting control plane node download-only-038476 in cluster download-only-038476
	I0811 23:01:22.592880    7794 cache.go:122] Beginning downloading kic base image for docker with crio
	I0811 23:01:22.594729    7794 out.go:97] Pulling base image ...
	I0811 23:01:22.594755    7794 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.0 and runtime crio
	I0811 23:01:22.594787    7794 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 in local docker daemon
	I0811 23:01:22.612375    7794 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 to local cache
	I0811 23:01:22.612516    7794 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 in local cache directory
	I0811 23:01:22.612534    7794 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 in local cache directory, skipping pull
	I0811 23:01:22.612539    7794 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 exists in cache, skipping pull
	I0811 23:01:22.612546    7794 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 as a tarball
	I0811 23:01:22.655186    7794 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0-rc.0/preloaded-images-k8s-v18-v1.28.0-rc.0-cri-o-overlay-arm64.tar.lz4
	I0811 23:01:22.655209    7794 cache.go:57] Caching tarball of preloaded images
	I0811 23:01:22.655338    7794 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.0 and runtime crio
	I0811 23:01:22.663469    7794 out.go:97] Downloading Kubernetes v1.28.0-rc.0 preload ...
	I0811 23:01:22.663501    7794 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.0-rc.0-cri-o-overlay-arm64.tar.lz4 ...
	I0811 23:01:22.780988    7794 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0-rc.0/preloaded-images-k8s-v18-v1.28.0-rc.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:37544145fde7f7b05f003ed35c9c5933 -> /home/jenkins/minikube-integration/17044-2333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-rc.0-cri-o-overlay-arm64.tar.lz4
	I0811 23:01:36.272248    7794 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.0-rc.0-cri-o-overlay-arm64.tar.lz4 ...
	I0811 23:01:36.272351    7794 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17044-2333/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-rc.0-cri-o-overlay-arm64.tar.lz4 ...
	I0811 23:01:37.127347    7794 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0-rc.0 on crio
	I0811 23:01:37.127491    7794 profile.go:148] Saving config to /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/download-only-038476/config.json ...
	I0811 23:01:37.127713    7794 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.0 and runtime crio
	I0811 23:01:37.127909    7794 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.0-rc.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0-rc.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/17044-2333/.minikube/cache/linux/arm64/v1.28.0-rc.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-038476"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0-rc.0/LogsDuration (0.08s)

TestDownloadOnly/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.23s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-038476
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.59s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-747851 --alsologtostderr --binary-mirror http://127.0.0.1:39489 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-747851" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-747851
--- PASS: TestBinaryMirror (0.59s)

TestAddons/Setup (152.09s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-arm64 start -p addons-557401 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:88: (dbg) Done: out/minikube-linux-arm64 start -p addons-557401 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m32.090057508s)
--- PASS: TestAddons/Setup (152.09s)

TestAddons/parallel/Registry (16.44s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 50.9113ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-f97vk" [599382dd-a86d-4d20-b84c-0cd4defdc9a1] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.022320298s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-djx95" [035ad142-8c49-40e5-8d68-7bba6b06c8c0] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.017578123s
addons_test.go:316: (dbg) Run:  kubectl --context addons-557401 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-557401 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-557401 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.125453457s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-arm64 -p addons-557401 ip
2023/08/11 23:04:40 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p addons-557401 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.44s)

TestAddons/parallel/InspektorGadget (11.07s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-sw7qp" [7e324f5b-24cd-4b9f-a208-cff76e3df746] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.02058292s
addons_test.go:817: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-557401
addons_test.go:817: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-557401: (6.040598637s)
--- PASS: TestAddons/parallel/InspektorGadget (11.07s)

TestAddons/parallel/MetricsServer (5.89s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 4.419698ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7746886d4f-lfs9m" [43d07384-7646-4ba7-b848-8899ed88f301] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.017892244s
addons_test.go:391: (dbg) Run:  kubectl --context addons-557401 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p addons-557401 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.89s)

TestAddons/parallel/CSI (40.16s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 14.761665ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-557401 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557401 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557401 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557401 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557401 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557401 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557401 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557401 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557401 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557401 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-557401 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [c8f1243d-524a-40e9-9fd8-871fa1c8e46e] Pending
helpers_test.go:344: "task-pv-pod" [c8f1243d-524a-40e9-9fd8-871fa1c8e46e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [c8f1243d-524a-40e9-9fd8-871fa1c8e46e] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.021243445s
addons_test.go:560: (dbg) Run:  kubectl --context addons-557401 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-557401 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-557401 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-557401 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-557401 delete pod task-pv-pod
addons_test.go:576: (dbg) Run:  kubectl --context addons-557401 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-557401 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557401 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557401 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-557401 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-557401 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [c42d9dc4-8389-4daf-94d4-1b309b9629dd] Pending
helpers_test.go:344: "task-pv-pod-restore" [c42d9dc4-8389-4daf-94d4-1b309b9629dd] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 6.016486441s
addons_test.go:602: (dbg) Run:  kubectl --context addons-557401 delete pod task-pv-pod-restore
addons_test.go:602: (dbg) Done: kubectl --context addons-557401 delete pod task-pv-pod-restore: (1.039810513s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-557401 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-557401 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-arm64 -p addons-557401 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-arm64 -p addons-557401 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.850220911s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-arm64 -p addons-557401 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (40.16s)

TestAddons/parallel/Headlamp (11.74s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-557401 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-557401 --alsologtostderr -v=1: (1.70650954s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5c78f74d8d-dnjhl" [b4f013d5-8c84-4eed-b3d8-6999e73d6452] Pending
helpers_test.go:344: "headlamp-5c78f74d8d-dnjhl" [b4f013d5-8c84-4eed-b3d8-6999e73d6452] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5c78f74d8d-dnjhl" [b4f013d5-8c84-4eed-b3d8-6999e73d6452] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.029087554s
--- PASS: TestAddons/parallel/Headlamp (11.74s)

TestAddons/parallel/CloudSpanner (5.73s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-d67854dc9-hbtzj" [35022d6e-4bf1-4e78-ac26-3b93a126d21c] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.015934607s
addons_test.go:836: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-557401
--- PASS: TestAddons/parallel/CloudSpanner (5.73s)

TestAddons/serial/GCPAuth/Namespaces (0.19s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-557401 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-557401 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

TestAddons/StoppedEnableDisable (12.33s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-557401
addons_test.go:148: (dbg) Done: out/minikube-linux-arm64 stop -p addons-557401: (12.052220074s)
addons_test.go:152: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-557401
addons_test.go:156: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-557401
addons_test.go:161: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-557401
--- PASS: TestAddons/StoppedEnableDisable (12.33s)

TestCertOptions (36.91s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-650393 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-650393 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (34.144719407s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-650393 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-650393 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-650393 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-650393" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-650393
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-650393: (2.03303331s)
--- PASS: TestCertOptions (36.91s)

TestCertExpiration (283.8s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-635223 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-635223 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (42.189801101s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-635223 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-635223 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (58.589590663s)
helpers_test.go:175: Cleaning up "cert-expiration-635223" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-635223
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-635223: (3.016599462s)
--- PASS: TestCertExpiration (283.80s)

TestForceSystemdFlag (41.66s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-847326 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-847326 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (36.40883643s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-847326 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-847326" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-847326
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-847326: (4.840708258s)
--- PASS: TestForceSystemdFlag (41.66s)

TestForceSystemdEnv (45.24s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-031031 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-031031 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (42.553408104s)
helpers_test.go:175: Cleaning up "force-systemd-env-031031" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-031031
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-031031: (2.688460566s)
--- PASS: TestForceSystemdEnv (45.24s)

TestErrorSpam/setup (31.46s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-629393 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-629393 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-629393 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-629393 --driver=docker  --container-runtime=crio: (31.460513352s)
--- PASS: TestErrorSpam/setup (31.46s)

TestErrorSpam/start (0.85s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-629393 --log_dir /tmp/nospam-629393 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-629393 --log_dir /tmp/nospam-629393 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-629393 --log_dir /tmp/nospam-629393 start --dry-run
--- PASS: TestErrorSpam/start (0.85s)

TestErrorSpam/status (1.1s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-629393 --log_dir /tmp/nospam-629393 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-629393 --log_dir /tmp/nospam-629393 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-629393 --log_dir /tmp/nospam-629393 status
--- PASS: TestErrorSpam/status (1.10s)

TestErrorSpam/pause (1.99s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-629393 --log_dir /tmp/nospam-629393 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-629393 --log_dir /tmp/nospam-629393 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-629393 --log_dir /tmp/nospam-629393 pause
--- PASS: TestErrorSpam/pause (1.99s)

TestErrorSpam/unpause (1.98s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-629393 --log_dir /tmp/nospam-629393 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-629393 --log_dir /tmp/nospam-629393 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-629393 --log_dir /tmp/nospam-629393 unpause
--- PASS: TestErrorSpam/unpause (1.98s)

TestErrorSpam/stop (1.43s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-629393 --log_dir /tmp/nospam-629393 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-629393 --log_dir /tmp/nospam-629393 stop: (1.240163667s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-629393 --log_dir /tmp/nospam-629393 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-629393 --log_dir /tmp/nospam-629393 stop
--- PASS: TestErrorSpam/stop (1.43s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17044-2333/.minikube/files/etc/test/nested/copy/7634/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (75.6s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-327081 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0811 23:09:24.803519    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/client.crt: no such file or directory
E0811 23:09:24.811223    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/client.crt: no such file or directory
E0811 23:09:24.821487    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/client.crt: no such file or directory
E0811 23:09:24.841753    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/client.crt: no such file or directory
E0811 23:09:24.882044    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/client.crt: no such file or directory
E0811 23:09:24.962326    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/client.crt: no such file or directory
E0811 23:09:25.122583    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/client.crt: no such file or directory
E0811 23:09:25.443116    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/client.crt: no such file or directory
E0811 23:09:26.083993    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/client.crt: no such file or directory
E0811 23:09:27.364448    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/client.crt: no such file or directory
E0811 23:09:29.924643    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/client.crt: no such file or directory
E0811 23:09:35.045198    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/client.crt: no such file or directory
E0811 23:09:45.286321    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/client.crt: no such file or directory
E0811 23:10:05.766542    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-327081 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m15.594540361s)
--- PASS: TestFunctional/serial/StartWithProxy (75.60s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (42.58s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-327081 --alsologtostderr -v=8
E0811 23:10:46.727175    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-327081 --alsologtostderr -v=8: (42.58025063s)
functional_test.go:659: soft start took 42.584495637s for "functional-327081" cluster.
--- PASS: TestFunctional/serial/SoftStart (42.58s)

TestFunctional/serial/KubeContext (0.07s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.11s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-327081 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.99s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-327081 cache add registry.k8s.io/pause:3.1: (1.393011578s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-327081 cache add registry.k8s.io/pause:3.3: (1.459138683s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-327081 cache add registry.k8s.io/pause:latest: (1.13452194s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.99s)

TestFunctional/serial/CacheCmd/cache/add_local (1.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-327081 /tmp/TestFunctionalserialCacheCmdcacheadd_local2867832542/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 cache add minikube-local-cache-test:functional-327081
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 cache delete minikube-local-cache-test:functional-327081
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-327081
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-327081 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (328.112767ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-327081 cache reload: (1.070995651s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.09s)

TestFunctional/serial/CacheCmd/cache/delete (0.11s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 kubectl -- --context functional-327081 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-327081 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (34.05s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-327081 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-327081 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.047759319s)
functional_test.go:757: restart took 34.047861088s for "functional-327081" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (34.05s)

TestFunctional/serial/ComponentHealth (0.1s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-327081 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.81s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-327081 logs: (1.806347398s)
--- PASS: TestFunctional/serial/LogsCmd (1.81s)

TestFunctional/serial/LogsFileCmd (1.84s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 logs --file /tmp/TestFunctionalserialLogsFileCmd2075555740/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-327081 logs --file /tmp/TestFunctionalserialLogsFileCmd2075555740/001/logs.txt: (1.839973887s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.84s)

TestFunctional/serial/InvalidService (4.39s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-327081 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-327081
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-327081: exit status 115 (725.782025ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30595 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-327081 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.39s)

TestFunctional/parallel/ConfigCmd (0.47s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-327081 config get cpus: exit status 14 (93.02719ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-327081 config get cpus: exit status 14 (78.61612ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)

TestFunctional/parallel/DashboardCmd (9.46s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-327081 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-327081 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 32304: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.46s)

TestFunctional/parallel/DryRun (0.49s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-327081 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-327081 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (211.100614ms)

-- stdout --
	* [functional-327081] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17044
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17044-2333/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17044-2333/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0811 23:12:31.537649   31887 out.go:296] Setting OutFile to fd 1 ...
	I0811 23:12:31.537780   31887 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:12:31.537788   31887 out.go:309] Setting ErrFile to fd 2...
	I0811 23:12:31.537793   31887 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:12:31.538103   31887 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17044-2333/.minikube/bin
	I0811 23:12:31.538494   31887 out.go:303] Setting JSON to false
	I0811 23:12:31.539691   31887 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":3300,"bootTime":1691792252,"procs":407,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0811 23:12:31.539756   31887 start.go:138] virtualization:  
	I0811 23:12:31.542121   31887 out.go:177] * [functional-327081] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	I0811 23:12:31.544185   31887 out.go:177]   - MINIKUBE_LOCATION=17044
	I0811 23:12:31.545864   31887 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0811 23:12:31.544299   31887 notify.go:220] Checking for updates...
	I0811 23:12:31.549207   31887 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17044-2333/kubeconfig
	I0811 23:12:31.550839   31887 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17044-2333/.minikube
	I0811 23:12:31.552477   31887 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0811 23:12:31.554168   31887 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0811 23:12:31.556288   31887 config.go:182] Loaded profile config "functional-327081": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0811 23:12:31.556821   31887 driver.go:373] Setting default libvirt URI to qemu:///system
	I0811 23:12:31.581711   31887 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0811 23:12:31.581832   31887 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0811 23:12:31.687640   31887 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-08-11 23:12:31.678065778 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0811 23:12:31.687741   31887 docker.go:294] overlay module found
	I0811 23:12:31.690958   31887 out.go:177] * Using the docker driver based on existing profile
	I0811 23:12:31.692811   31887 start.go:298] selected driver: docker
	I0811 23:12:31.692829   31887 start.go:901] validating driver "docker" against &{Name:functional-327081 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:functional-327081 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0811 23:12:31.692949   31887 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0811 23:12:31.695350   31887 out.go:177] 
	W0811 23:12:31.697204   31887 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0811 23:12:31.699081   31887 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-327081 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.49s)

TestFunctional/parallel/InternationalLanguage (0.21s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-327081 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-327081 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (207.314468ms)

-- stdout --
	* [functional-327081] minikube v1.31.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17044
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17044-2333/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17044-2333/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0811 23:12:31.335661   31846 out.go:296] Setting OutFile to fd 1 ...
	I0811 23:12:31.335865   31846 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:12:31.335895   31846 out.go:309] Setting ErrFile to fd 2...
	I0811 23:12:31.335914   31846 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:12:31.336281   31846 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17044-2333/.minikube/bin
	I0811 23:12:31.336659   31846 out.go:303] Setting JSON to false
	I0811 23:12:31.337733   31846 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":3300,"bootTime":1691792252,"procs":407,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0811 23:12:31.337830   31846 start.go:138] virtualization:  
	I0811 23:12:31.340362   31846 out.go:177] * [functional-327081] minikube v1.31.1 sur Ubuntu 20.04 (arm64)
	I0811 23:12:31.342437   31846 out.go:177]   - MINIKUBE_LOCATION=17044
	I0811 23:12:31.344122   31846 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0811 23:12:31.342551   31846 notify.go:220] Checking for updates...
	I0811 23:12:31.347353   31846 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17044-2333/kubeconfig
	I0811 23:12:31.348877   31846 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17044-2333/.minikube
	I0811 23:12:31.350620   31846 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0811 23:12:31.352372   31846 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0811 23:12:31.354355   31846 config.go:182] Loaded profile config "functional-327081": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0811 23:12:31.354913   31846 driver.go:373] Setting default libvirt URI to qemu:///system
	I0811 23:12:31.382301   31846 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0811 23:12:31.382389   31846 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0811 23:12:31.479224   31846 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-08-11 23:12:31.469644654 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0811 23:12:31.479331   31846 docker.go:294] overlay module found
	I0811 23:12:31.481156   31846 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0811 23:12:31.482781   31846 start.go:298] selected driver: docker
	I0811 23:12:31.482797   31846 start.go:901] validating driver "docker" against &{Name:functional-327081 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:functional-327081 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0811 23:12:31.482929   31846 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0811 23:12:31.485030   31846 out.go:177] 
	W0811 23:12:31.486675   31846 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0811 23:12:31.488377   31846 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)

TestFunctional/parallel/StatusCmd (1.19s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.19s)
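
The status checks above can be reproduced outside the test harness. Below is a minimal Go sketch, assuming `minikube` is on PATH and using the profile name from this run; it shells out to `minikube status -o json` and decodes the same fields the test's Go template references (.Host, .Kubelet, .APIServer, .Kubeconfig).

// status_sketch.go: a minimal sketch, not part of the test suite.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// Fields mirror the template keys exercised by the StatusCmd test above.
type minikubeStatus struct {
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	// minikube exits non-zero when a component is stopped, but the JSON on
	// stdout is still usable, so only bail out if nothing was printed.
	out, err := exec.Command("minikube", "-p", "functional-327081",
		"status", "-o", "json").Output()
	if len(out) == 0 && err != nil {
		log.Fatalf("minikube status: %v", err)
	}
	var st minikubeStatus
	if err := json.Unmarshal(out, &st); err != nil {
		log.Fatalf("decode status: %v", err)
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
		st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
}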

TestFunctional/parallel/ServiceCmdConnect (10.73s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-327081 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-327081 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58d66798bb-ghhc4" [d4095ca8-1a12-4d67-af8f-3b82baea9dd5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-58d66798bb-ghhc4" [d4095ca8-1a12-4d67-af8f-3b82baea9dd5] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.016851264s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:31909
functional_test.go:1674: http://192.168.49.2:31909: success! body:

Hostname: hello-node-connect-58d66798bb-ghhc4

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31909
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.73s)
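
The connectivity check above reduces to: create a deployment, expose it as a NodePort service, resolve the URL with `minikube service ... --url`, then poll until the echoserver answers. A minimal Go sketch of the polling step follows; the URL is copied from this run's log and will differ on every run.

// poll_sketch.go: a minimal sketch of the HTTP retry loop.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	url := "http://192.168.49.2:31909" // NodePort URL from the log above
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("success! body:\n%s\n", body)
				return
			}
		}
		time.Sleep(2 * time.Second) // pod may still be starting
	}
	log.Fatal("service never became reachable")
}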

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (24.87s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [ce607bca-a2f5-4af8-bf52-434e01456ec5] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.03468072s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-327081 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-327081 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-327081 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-327081 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [60c1062e-37b4-47e2-98dd-7bdbed711e60] Pending
helpers_test.go:344: "sp-pod" [60c1062e-37b4-47e2-98dd-7bdbed711e60] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0811 23:12:08.647632    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [60c1062e-37b4-47e2-98dd-7bdbed711e60] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.018356826s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-327081 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-327081 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-327081 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f7bcc407-8142-437c-9975-a7241ad56217] Pending
helpers_test.go:344: "sp-pod" [f7bcc407-8142-437c-9975-a7241ad56217] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f7bcc407-8142-437c-9975-a7241ad56217] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.01910027s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-327081 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.87s)
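
The sequence above is the usual persistence check: write a file through the first pod, delete the pod, recreate it from the same manifest, and confirm the file is still on the claim. Below is a minimal Go sketch of the same flow via kubectl; the pod name and manifest path are copied from the log, and the readiness wait the test performs between apply and the final exec is elided.

// pvc_sketch.go: a minimal sketch, assuming kubectl targets this cluster.
package main

import (
	"log"
	"os"
	"os/exec"
)

func kubectl(args ...string) {
	cmd := exec.Command("kubectl", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("kubectl %v: %v", args, err)
	}
}

func main() {
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo") // write via pod 1
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// The real test waits for the new pod to be Running before this check.
	kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount") // foo should survive
}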

TestFunctional/parallel/SSHCmd (0.77s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.77s)

TestFunctional/parallel/CpCmd (1.78s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 ssh -n functional-327081 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 cp functional-327081:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd532445600/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 ssh -n functional-327081 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.78s)

TestFunctional/parallel/FileSync (0.41s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/7634/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 ssh "sudo cat /etc/test/nested/copy/7634/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.41s)

TestFunctional/parallel/CertSync (2.36s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/7634.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 ssh "sudo cat /etc/ssl/certs/7634.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/7634.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 ssh "sudo cat /usr/share/ca-certificates/7634.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/76342.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 ssh "sudo cat /etc/ssl/certs/76342.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/76342.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 ssh "sudo cat /usr/share/ca-certificates/76342.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.36s)
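
Each synced certificate is checked at three places: the file under /etc/ssl/certs, the copy under /usr/share/ca-certificates, and the OpenSSL subject-hash symlink (51391683.0 above). A minimal Go sketch of the same presence check, using `test -s` rather than the test's `cat`; paths are copied from this run.

// certsync_sketch.go: a minimal sketch of the cert presence probes.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	paths := []string{
		"/etc/ssl/certs/7634.pem",
		"/usr/share/ca-certificates/7634.pem",
		"/etc/ssl/certs/51391683.0", // subject-hash symlink to the same cert
	}
	for _, p := range paths {
		// exit status 0 means the file exists and is non-empty
		err := exec.Command("minikube", "-p", "functional-327081",
			"ssh", "sudo test -s "+p).Run()
		fmt.Printf("%-42s present=%v\n", p, err == nil)
	}
}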

TestFunctional/parallel/NodeLabels (0.13s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-327081 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.13s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.86s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-327081 ssh "sudo systemctl is-active docker": exit status 1 (444.241546ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-327081 ssh "sudo systemctl is-active containerd": exit status 1 (412.350746ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.86s)
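
The exit codes carry the assertion here: `systemctl is-active` exits 0 for an active unit and non-zero otherwise (status 3 with "inactive" on stdout, as in both probes above), so on a crio cluster docker and containerd must both fail the check. A minimal Go sketch of the probe, assuming `minikube` is on PATH:

// runtime_sketch.go: a minimal sketch of the is-active probe.
package main

import (
	"fmt"
	"os/exec"
)

// isActive reports whether a systemd unit inside the node is active.
// Any error, including exit status 3 ("inactive"), counts as not active.
func isActive(unit string) bool {
	return exec.Command("minikube", "-p", "functional-327081", "ssh",
		"sudo systemctl is-active "+unit).Run() == nil
}

func main() {
	for _, unit := range []string{"crio", "docker", "containerd"} {
		fmt.Printf("%-12s active=%v\n", unit, isActive(unit))
	}
}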

TestFunctional/parallel/License (0.37s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.37s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.68s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-327081 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-327081 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-327081 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-327081 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 29996: os: process already finished
helpers_test.go:502: unable to terminate pid 29842: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.68s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-327081 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.55s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-327081 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [87175af2-0599-4a40-a40f-4b49ff515d98] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [87175af2-0599-4a40-a40f-4b49ff515d98] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.052802623s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.55s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.16s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-327081 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.16s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.110.124.191 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
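
Taken together, the tunnel subtests start `minikube tunnel`, wait for the LoadBalancer service to be assigned an ingress IP (the jsonpath query above), and then hit that IP directly. A minimal Go sketch of those last two steps, assuming a tunnel is already running and kubectl targets this cluster:

// tunnel_sketch.go: a minimal sketch of the ingress-IP probe.
package main

import (
	"fmt"
	"log"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "get", "svc", "nginx-svc",
		"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
	if err != nil {
		log.Fatalf("get svc: %v", err)
	}
	ip := strings.TrimSpace(string(out))
	if ip == "" {
		log.Fatal("no ingress IP yet; is `minikube tunnel` running?")
	}
	resp, err := http.Get("http://" + ip)
	if err != nil {
		log.Fatalf("tunnel not reachable: %v", err)
	}
	resp.Body.Close()
	fmt.Printf("tunnel at http://%s is working! (HTTP %d)\n", ip, resp.StatusCode)
}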

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-327081 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-327081 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-327081 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-7b684b55f9-7hskb" [9ecd84a0-68f8-4f78-b308-52952a615618] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-7b684b55f9-7hskb" [9ecd84a0-68f8-4f78-b308-52952a615618] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.024174844s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.27s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "357.623635ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "60.099256ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "341.742992ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "53.222406ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)
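
`profile list -o json` emits profiles grouped under top-level arrays; the "valid"/"invalid" keys in the sketch below are an assumption based on current minikube output, not something shown in this log. A minimal Go sketch of consuming it:

// profiles_sketch.go: a minimal sketch; the JSON keys are assumptions.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type profileList struct {
	Valid   []struct{ Name string } `json:"valid"`
	Invalid []struct{ Name string } `json:"invalid"`
}

func main() {
	out, err := exec.Command("minikube", "profile", "list", "-o", "json").Output()
	if err != nil {
		log.Fatalf("profile list: %v", err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		log.Fatalf("decode: %v", err)
	}
	for _, p := range pl.Valid {
		fmt.Println("valid profile:", p.Name)
	}
}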

TestFunctional/parallel/MountCmd/any-port (8.51s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-327081 /tmp/TestFunctionalparallelMountCmdany-port591809776/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1691795545572509159" to /tmp/TestFunctionalparallelMountCmdany-port591809776/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1691795545572509159" to /tmp/TestFunctionalparallelMountCmdany-port591809776/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1691795545572509159" to /tmp/TestFunctionalparallelMountCmdany-port591809776/001/test-1691795545572509159
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-327081 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (367.237193ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 11 23:12 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 11 23:12 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 11 23:12 test-1691795545572509159
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 ssh cat /mount-9p/test-1691795545572509159
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-327081 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [979924e7-f8f8-4c21-b225-24c39b988716] Pending
helpers_test.go:344: "busybox-mount" [979924e7-f8f8-4c21-b225-24c39b988716] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [979924e7-f8f8-4c21-b225-24c39b988716] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [979924e7-f8f8-4c21-b225-24c39b988716] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.025405007s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-327081 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-327081 /tmp/TestFunctionalparallelMountCmdany-port591809776/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.51s)
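
The any-port flow is: run `minikube mount host:guest` as a long-lived background process, then confirm from inside the node that a 9p filesystem is mounted at the guest path; the first `findmnt` probe above fails simply because the mount was still coming up, hence the retry. A minimal Go sketch of that start-and-wait pattern, with /tmp/data as a placeholder host directory:

// mount_sketch.go: a minimal sketch; /tmp/data must exist on the host.
package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	mount := exec.Command("minikube", "mount", "-p", "functional-327081",
		"/tmp/data:/mount-9p")
	if err := mount.Start(); err != nil {
		log.Fatalf("start mount: %v", err)
	}
	defer mount.Process.Kill() // the test stops the mount daemon the same way

	// findmnt exits non-zero until the 9p mount appears, so retry briefly,
	// mirroring the failed-then-passed probe in the log above.
	for i := 0; i < 10; i++ {
		probe := exec.Command("minikube", "-p", "functional-327081",
			"ssh", "findmnt -T /mount-9p | grep 9p")
		if out, err := probe.Output(); err == nil {
			log.Printf("mounted:\n%s", out)
			return
		}
		time.Sleep(time.Second)
	}
	log.Fatal("mount never appeared inside the node")
}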

TestFunctional/parallel/ServiceCmd/List (0.62s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.62s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 service list -o json
functional_test.go:1493: Took "551.392538ms" to run "out/minikube-linux-arm64 -p functional-327081 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:30248
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

TestFunctional/parallel/ServiceCmd/Format (0.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.42s)

TestFunctional/parallel/ServiceCmd/URL (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:30248
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.47s)

TestFunctional/parallel/MountCmd/specific-port (2.45s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-327081 /tmp/TestFunctionalparallelMountCmdspecific-port4185040939/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-327081 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (577.584854ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-327081 /tmp/TestFunctionalparallelMountCmdspecific-port4185040939/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-327081 ssh "sudo umount -f /mount-9p": exit status 1 (460.148336ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-327081 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-327081 /tmp/TestFunctionalparallelMountCmdspecific-port4185040939/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.45s)

TestFunctional/parallel/MountCmd/VerifyCleanup (3.28s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-327081 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1705287201/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-327081 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1705287201/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-327081 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1705287201/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-327081 ssh "findmnt -T" /mount1: exit status 1 (1.166133072s)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-327081 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-327081 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1705287201/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-327081 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1705287201/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-327081 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1705287201/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (3.28s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.09s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-327081 version -o=json --components: (1.094048537s)
--- PASS: TestFunctional/parallel/Version/components (1.09s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-327081 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.27.4
registry.k8s.io/kube-proxy:v1.27.4
registry.k8s.io/kube-controller-manager:v1.27.4
registry.k8s.io/kube-apiserver:v1.27.4
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-327081
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20230511-dc714da8
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-327081 image ls --format short --alsologtostderr:
I0811 23:13:00.362016   34406 out.go:296] Setting OutFile to fd 1 ...
I0811 23:13:00.362198   34406 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0811 23:13:00.362206   34406 out.go:309] Setting ErrFile to fd 2...
I0811 23:13:00.362211   34406 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0811 23:13:00.362482   34406 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17044-2333/.minikube/bin
I0811 23:13:00.363118   34406 config.go:182] Loaded profile config "functional-327081": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0811 23:13:00.363255   34406 config.go:182] Loaded profile config "functional-327081": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0811 23:13:00.363716   34406 cli_runner.go:164] Run: docker container inspect functional-327081 --format={{.State.Status}}
I0811 23:13:00.390901   34406 ssh_runner.go:195] Run: systemctl --version
I0811 23:13:00.390961   34406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-327081
I0811 23:13:00.414994   34406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/functional-327081/id_rsa Username:docker}
I0811 23:13:00.518862   34406 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-327081 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/google-containers/addon-resizer  | functional-327081  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/etcd                    | 3.5.7-0            | 24bc64e911039 | 182MB  |
| registry.k8s.io/kube-apiserver          | v1.27.4            | 64aece92d6bde | 116MB  |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| docker.io/library/nginx                 | latest             | ff78c7a65ec2b | 196MB  |
| registry.k8s.io/kube-controller-manager | v1.27.4            | 389f6f052cf83 | 109MB  |
| registry.k8s.io/kube-proxy              | v1.27.4            | 532e5a30e948f | 68.1MB |
| registry.k8s.io/kube-scheduler          | v1.27.4            | 6eb63895cb67f | 57.6MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| docker.io/kindest/kindnetd              | v20230511-dc714da8 | b18bf71b941ba | 60.9MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | 97e04611ad434 | 51.4MB |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| docker.io/library/nginx                 | alpine             | 7987e0c18af05 | 42.7MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-327081 image ls --format table --alsologtostderr:
I0811 23:13:00.684528   34467 out.go:296] Setting OutFile to fd 1 ...
I0811 23:13:00.684723   34467 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0811 23:13:00.684731   34467 out.go:309] Setting ErrFile to fd 2...
I0811 23:13:00.684737   34467 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0811 23:13:00.684983   34467 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17044-2333/.minikube/bin
I0811 23:13:00.685653   34467 config.go:182] Loaded profile config "functional-327081": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0811 23:13:00.686947   34467 config.go:182] Loaded profile config "functional-327081": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0811 23:13:00.687444   34467 cli_runner.go:164] Run: docker container inspect functional-327081 --format={{.State.Status}}
I0811 23:13:00.709604   34467 ssh_runner.go:195] Run: systemctl --version
I0811 23:13:00.709651   34467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-327081
I0811 23:13:00.738604   34467 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/functional-327081/id_rsa Username:docker}
I0811 23:13:00.849935   34467 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-327081 image ls --format json --alsologtostderr:
[{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"389f6f052cf83156f82a2bbbf6ea2c24292d246b58900d91f6a1707eacf510b2","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265","registry.k8s.io/kube-controller-manager@sha256:955b498eda0646d58e6d15e1156da8ac731dedf1a9a47b5fbccce0d5e29fd3fd"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.27.4"],"size":"108667702"},{"id":"532e5a30e948f1c084333316b13e68fbeff8df667f3830b082005127a6d86317","repoDigests":["registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf","registry.k8s.io/kube-proxy@sha256:f22b84e066d9bb46451754c220ae6f85bfaf4b661636af4bcc22c221f9b8ccca"],"repoTags":["registry.k8s.io/kube-proxy:v1.27.4"],"size":"680
99991"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79","repoDigests":["docker.io/kindest/kindnetd@sha256:2c39858b71cf6c5737ff0daa8130a6574d4c6bd2a7dacaf002060c02f2bc1b4f","docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974"],"repoTags":["docker.io/kindest/kindnetd:v20230511-dc714da8"],"size":"60881430"},{"id":"ff78c7a65ec2b1fb09f58b27b0dd022ac1f4e16b9bcfe1cbdc18c36f2e0e1842","repoDigests":["docker.io/library/nginx@sha256:67f9a4f10d147a6e04629340e6493c9703300ca23a2f7f3aa56fe615d75d31ca","docker.io/library/nginx@sha256:6faff3cb6b8c141d4828ac6c884a38a680ec6ad122c19397e4774f0bb9616f0c"],"repoTags":["docker.io/library/nginx:latest"],"size":"196443408"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa747
1df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-327081"],"size":"34114467"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51393451"},{"id":"24bc64
e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737","repoDigests":["registry.k8s.io/etcd@sha256:1c19137e8a1716ce9f66c8c767bf114d7cad975db7a9784146486aa764f6dddd","registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83"],"repoTags":["registry.k8s.io/etcd:3.5.7-0"],"size":"182283991"},{"id":"64aece92d6bde5b472d8185fcd2d5ab1add8814923a26561821f7cab5e819388","repoDigests":["registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d","registry.k8s.io/kube-apiserver@sha256:f65711310c4a5a305faecd8630aeee145cda14bee3a018967c02a1495170e815"],"repoTags":["registry.k8s.io/kube-apiserver:v1.27.4"],"size":"116270032"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repo
Digests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"7987e0c18af05e20ea2f672d05e2fe43960553df199d00536b89ea5514c1cf36","repo
Digests":["docker.io/library/nginx@sha256:647c5c83418c19eef0cddc647b9899326e3081576390c4c7baa4fce545123b6c","docker.io/library/nginx@sha256:6df2b0a2ad7011147efeaacf108b43c8998cdaf5f95afe26d52f14621a80487b"],"repoTags":["docker.io/library/nginx:alpine"],"size":"42747194"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"6eb63895cb67fce76da3ed6eaaa865ff55e7c761c9e6a691a83855ff0987a085","repoDigests":["registry.k8s.io/kube-scheduler@sha256:516cd341872a8d3c967df9a69eeff664651efbb9df438f8dce6bf3ab430f26f8","registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af"],"repoTags":["registry.k8s.io/kube-scheduler:v1.27.4"],"size":"57615158"},{"id":"8cb209
1f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-327081 image ls --format json --alsologtostderr:
I0811 23:13:00.658929   34462 out.go:296] Setting OutFile to fd 1 ...
I0811 23:13:00.659150   34462 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0811 23:13:00.659176   34462 out.go:309] Setting ErrFile to fd 2...
I0811 23:13:00.659193   34462 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0811 23:13:00.659486   34462 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17044-2333/.minikube/bin
I0811 23:13:00.660107   34462 config.go:182] Loaded profile config "functional-327081": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0811 23:13:00.660288   34462 config.go:182] Loaded profile config "functional-327081": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0811 23:13:00.660790   34462 cli_runner.go:164] Run: docker container inspect functional-327081 --format={{.State.Status}}
I0811 23:13:00.680780   34462 ssh_runner.go:195] Run: systemctl --version
I0811 23:13:00.680826   34462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-327081
I0811 23:13:00.705298   34462 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/functional-327081/id_rsa Username:docker}
I0811 23:13:00.815061   34462 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
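For reference, the stdout above from "image ls --format json" is a flat JSON array of image records (id, repoDigests, repoTags, size); the "--format yaml" listing in the next test carries the same fields. The following minimal Go sketch is illustrative only, not part of the test suite; it decodes such a listing from stdin:

// decodeimages.go (illustrative): reads "image ls --format json" output
// from stdin and prints each image's preferred name and size.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
)

// image mirrors one record of the JSON array shown above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, encoded as a string
}

func main() {
	var images []image
	if err := json.NewDecoder(os.Stdin).Decode(&images); err != nil {
		log.Fatalf("decoding image list: %v", err)
	}
	for _, img := range images {
		name := img.ID // fall back to the bare ID for untagged images
		if len(img.RepoTags) > 0 {
			name = img.RepoTags[0]
		} else if len(img.RepoDigests) > 0 {
			name = img.RepoDigests[0]
		}
		fmt.Printf("%s\t%s bytes\n", name, img.Size)
	}
}

It can be driven with: out/minikube-linux-arm64 -p functional-327081 image ls --format json | go run decodeimages.go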

TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-327081 image ls --format yaml --alsologtostderr:
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-327081
size: "34114467"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 64aece92d6bde5b472d8185fcd2d5ab1add8814923a26561821f7cab5e819388
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d
- registry.k8s.io/kube-apiserver@sha256:f65711310c4a5a305faecd8630aeee145cda14bee3a018967c02a1495170e815
repoTags:
- registry.k8s.io/kube-apiserver:v1.27.4
size: "116270032"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79
repoDigests:
- docker.io/kindest/kindnetd@sha256:2c39858b71cf6c5737ff0daa8130a6574d4c6bd2a7dacaf002060c02f2bc1b4f
- docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974
repoTags:
- docker.io/kindest/kindnetd:v20230511-dc714da8
size: "60881430"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: ff78c7a65ec2b1fb09f58b27b0dd022ac1f4e16b9bcfe1cbdc18c36f2e0e1842
repoDigests:
- docker.io/library/nginx@sha256:67f9a4f10d147a6e04629340e6493c9703300ca23a2f7f3aa56fe615d75d31ca
- docker.io/library/nginx@sha256:6faff3cb6b8c141d4828ac6c884a38a680ec6ad122c19397e4774f0bb9616f0c
repoTags:
- docker.io/library/nginx:latest
size: "196443408"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51393451"
- id: 24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737
repoDigests:
- registry.k8s.io/etcd@sha256:1c19137e8a1716ce9f66c8c767bf114d7cad975db7a9784146486aa764f6dddd
- registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83
repoTags:
- registry.k8s.io/etcd:3.5.7-0
size: "182283991"
- id: 532e5a30e948f1c084333316b13e68fbeff8df667f3830b082005127a6d86317
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf
- registry.k8s.io/kube-proxy@sha256:f22b84e066d9bb46451754c220ae6f85bfaf4b661636af4bcc22c221f9b8ccca
repoTags:
- registry.k8s.io/kube-proxy:v1.27.4
size: "68099991"
- id: 6eb63895cb67fce76da3ed6eaaa865ff55e7c761c9e6a691a83855ff0987a085
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:516cd341872a8d3c967df9a69eeff664651efbb9df438f8dce6bf3ab430f26f8
- registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af
repoTags:
- registry.k8s.io/kube-scheduler:v1.27.4
size: "57615158"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 7987e0c18af05e20ea2f672d05e2fe43960553df199d00536b89ea5514c1cf36
repoDigests:
- docker.io/library/nginx@sha256:647c5c83418c19eef0cddc647b9899326e3081576390c4c7baa4fce545123b6c
- docker.io/library/nginx@sha256:6df2b0a2ad7011147efeaacf108b43c8998cdaf5f95afe26d52f14621a80487b
repoTags:
- docker.io/library/nginx:alpine
size: "42747194"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 389f6f052cf83156f82a2bbbf6ea2c24292d246b58900d91f6a1707eacf510b2
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265
- registry.k8s.io/kube-controller-manager@sha256:955b498eda0646d58e6d15e1156da8ac731dedf1a9a47b5fbccce0d5e29fd3fd
repoTags:
- registry.k8s.io/kube-controller-manager:v1.27.4
size: "108667702"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-327081 image ls --format yaml --alsologtostderr:
I0811 23:13:00.362237   34405 out.go:296] Setting OutFile to fd 1 ...
I0811 23:13:00.362402   34405 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0811 23:13:00.362425   34405 out.go:309] Setting ErrFile to fd 2...
I0811 23:13:00.362442   34405 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0811 23:13:00.362776   34405 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17044-2333/.minikube/bin
I0811 23:13:00.363501   34405 config.go:182] Loaded profile config "functional-327081": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0811 23:13:00.363678   34405 config.go:182] Loaded profile config "functional-327081": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0811 23:13:00.364290   34405 cli_runner.go:164] Run: docker container inspect functional-327081 --format={{.State.Status}}
I0811 23:13:00.393215   34405 ssh_runner.go:195] Run: systemctl --version
I0811 23:13:00.393265   34405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-327081
I0811 23:13:00.424905   34405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/functional-327081/id_rsa Username:docker}
I0811 23:13:00.531164   34405 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.91s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-327081 ssh pgrep buildkitd: exit status 1 (317.417633ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 image build -t localhost/my-image:functional-327081 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-327081 image build -t localhost/my-image:functional-327081 testdata/build --alsologtostderr: (2.348550335s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-327081 image build -t localhost/my-image:functional-327081 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> c5c51df2fd4
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-327081
--> eee36c7154f
Successfully tagged localhost/my-image:functional-327081
eee36c7154f5886877f31aefcc529938f65f4505f5b23957c80e5c026b478998
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-327081 image build -t localhost/my-image:functional-327081 testdata/build --alsologtostderr:
I0811 23:13:01.239670   34568 out.go:296] Setting OutFile to fd 1 ...
I0811 23:13:01.239878   34568 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0811 23:13:01.239907   34568 out.go:309] Setting ErrFile to fd 2...
I0811 23:13:01.239930   34568 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0811 23:13:01.240242   34568 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17044-2333/.minikube/bin
I0811 23:13:01.240854   34568 config.go:182] Loaded profile config "functional-327081": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0811 23:13:01.241587   34568 config.go:182] Loaded profile config "functional-327081": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0811 23:13:01.242103   34568 cli_runner.go:164] Run: docker container inspect functional-327081 --format={{.State.Status}}
I0811 23:13:01.260603   34568 ssh_runner.go:195] Run: systemctl --version
I0811 23:13:01.260652   34568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-327081
I0811 23:13:01.280506   34568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/functional-327081/id_rsa Username:docker}
I0811 23:13:01.382777   34568 build_images.go:151] Building image from path: /tmp/build.2683809595.tar
I0811 23:13:01.382846   34568 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0811 23:13:01.393487   34568 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2683809595.tar
I0811 23:13:01.397879   34568 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2683809595.tar: stat -c "%s %y" /var/lib/minikube/build/build.2683809595.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2683809595.tar': No such file or directory
I0811 23:13:01.397910   34568 ssh_runner.go:362] scp /tmp/build.2683809595.tar --> /var/lib/minikube/build/build.2683809595.tar (3072 bytes)
I0811 23:13:01.427422   34568 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2683809595
I0811 23:13:01.438176   34568 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2683809595 -xf /var/lib/minikube/build/build.2683809595.tar
I0811 23:13:01.449401   34568 crio.go:297] Building image: /var/lib/minikube/build/build.2683809595
I0811 23:13:01.449483   34568 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-327081 /var/lib/minikube/build/build.2683809595 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0811 23:13:03.510099   34568 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-327081 /var/lib/minikube/build/build.2683809595 --cgroup-manager=cgroupfs: (2.060589635s)
I0811 23:13:03.510168   34568 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2683809595
I0811 23:13:03.520800   34568 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2683809595.tar
I0811 23:13:03.531409   34568 build_images.go:207] Built localhost/my-image:functional-327081 from /tmp/build.2683809595.tar
I0811 23:13:03.531434   34568 build_images.go:123] succeeded building to: functional-327081
I0811 23:13:03.531438   34568 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.91s)
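The stderr above traces minikube's build path: the local testdata/build context is tarred, copied to /var/lib/minikube/build on the node, unpacked, and built there with podman. The sketch below is a hypothetical reconstruction, not the test's code: it recreates an equivalent context (the Dockerfile lines are inferred from the three STEP lines in the stdout, and content.txt is a placeholder, since the real testdata/build contents are not shown) and drives the same image build command.

// rebuild.go (illustrative sketch, not the suite's implementation).
package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	dir, err := os.MkdirTemp("", "imagebuild")
	if err != nil {
		log.Fatal(err)
	}
	defer os.RemoveAll(dir)

	// The three STEP lines above imply a context of a Dockerfile plus content.txt.
	dockerfile := "FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n"
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		log.Fatal(err)
	}
	// Placeholder payload; the real file contents are not visible in the log.
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("test\n"), 0o644); err != nil {
		log.Fatal(err)
	}

	// Same invocation the test makes; minikube handles the tar/scp/podman steps.
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-327081",
		"image", "build", "-t", "localhost/my-image:functional-327081", dir)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}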

TestFunctional/parallel/ImageCommands/Setup (1.93s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
2023/08/11 23:12:41 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.877355331s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-327081
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.93s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 image load --daemon gcr.io/google-containers/addon-resizer:functional-327081 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-327081 image load --daemon gcr.io/google-containers/addon-resizer:functional-327081 --alsologtostderr: (4.764293337s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.03s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 image load --daemon gcr.io/google-containers/addon-resizer:functional-327081 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-327081 image load --daemon gcr.io/google-containers/addon-resizer:functional-327081 --alsologtostderr: (2.606134753s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.84s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.68871974s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-327081
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 image load --daemon gcr.io/google-containers/addon-resizer:functional-327081 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-327081 image load --daemon gcr.io/google-containers/addon-resizer:functional-327081 --alsologtostderr: (3.573255211s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.52s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.91s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 image save gcr.io/google-containers/addon-resizer:functional-327081 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.91s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 image rm gcr.io/google-containers/addon-resizer:functional-327081 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-327081 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.017311451s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.28s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-327081
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-327081 image save --daemon gcr.io/google-containers/addon-resizer:functional-327081 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-arm64 -p functional-327081 image save --daemon gcr.io/google-containers/addon-resizer:functional-327081 --alsologtostderr: (1.026980314s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-327081
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.07s)

TestFunctional/delete_addon-resizer_images (0.08s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-327081
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-327081
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-327081
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (95.28s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-200414 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0811 23:14:24.802925    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-200414 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m35.277154176s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (95.28s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (12.01s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-200414 addons enable ingress --alsologtostderr -v=5
E0811 23:14:52.488786    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-200414 addons enable ingress --alsologtostderr -v=5: (12.009608162s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (12.01s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.65s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-200414 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.65s)

TestJSONOutput/start/Command (74.67s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-061277 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0811 23:18:21.402482    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/functional-327081/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-061277 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m14.667757477s)
--- PASS: TestJSONOutput/start/Command (74.67s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.81s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-061277 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.81s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.74s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-061277 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.74s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.84s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-061277 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-061277 --output=json --user=testUser: (5.842477832s)
--- PASS: TestJSONOutput/stop/Command (5.84s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-729264 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-729264 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (80.653262ms)
-- stdout --
	{"specversion":"1.0","id":"50552621-0e04-4ac5-991e-db064efc4fc4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-729264] minikube v1.31.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"52129a16-48a9-4046-baf3-ade733f0fa85","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17044"}}
	{"specversion":"1.0","id":"bd89e58a-59f7-4eb9-a2c8-83e855cd4ee4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5f73a3bb-0e08-43a0-ac40-cbe449285f24","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17044-2333/kubeconfig"}}
	{"specversion":"1.0","id":"102408cd-1dae-49f5-9618-31d9b1999597","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17044-2333/.minikube"}}
	{"specversion":"1.0","id":"d8f1a9d6-9261-4d34-a1d6-40c169d106e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"088d1434-decd-4297-8e75-594b310d8d14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"75623024-9492-46b7-ac27-1c7cb572b546","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-729264" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-729264
--- PASS: TestErrorJSONOutput (0.23s)
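Each stdout line above is a self-contained CloudEvents-style JSON object. The small Go sketch below, a hypothetical helper rather than anything in minikube or the suite, scans such output line by line and surfaces error events like the DRV_UNSUPPORTED_OS one:

// events.go (illustrative): parses minikube --output=json lines from stdin.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os"
)

// event mirrors the fields visible in the stdout above; data values are
// strings in every event shown.
type event struct {
	Type string            `json:"type"` // e.g. io.k8s.sigs.minikube.error
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines interleaved in the output
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit code %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}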

TestKicCustomNetwork/create_custom_network (43.24s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-335060 --network=
E0811 23:19:43.322716    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/functional-327081/client.crt: no such file or directory
E0811 23:19:54.617684    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/client.crt: no such file or directory
E0811 23:19:54.622919    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/client.crt: no such file or directory
E0811 23:19:54.633162    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/client.crt: no such file or directory
E0811 23:19:54.653408    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/client.crt: no such file or directory
E0811 23:19:54.693664    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/client.crt: no such file or directory
E0811 23:19:54.773924    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/client.crt: no such file or directory
E0811 23:19:54.934621    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/client.crt: no such file or directory
E0811 23:19:55.255167    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/client.crt: no such file or directory
E0811 23:19:55.896098    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/client.crt: no such file or directory
E0811 23:19:57.176743    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/client.crt: no such file or directory
E0811 23:19:59.736943    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/client.crt: no such file or directory
E0811 23:20:04.858078    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-335060 --network=: (41.072372041s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-335060" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-335060
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-335060: (2.140516872s)
--- PASS: TestKicCustomNetwork/create_custom_network (43.24s)

TestKicCustomNetwork/use_default_bridge_network (37.84s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-974426 --network=bridge
E0811 23:20:15.098354    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/client.crt: no such file or directory
E0811 23:20:35.579226    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-974426 --network=bridge: (35.843803073s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-974426" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-974426
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-974426: (1.972481809s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (37.84s)

TestKicExistingNetwork (33.21s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-224786 --network=existing-network
E0811 23:21:16.539483    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-224786 --network=existing-network: (31.028569373s)
helpers_test.go:175: Cleaning up "existing-network-224786" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-224786
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-224786: (2.014861243s)
--- PASS: TestKicExistingNetwork (33.21s)

TestKicCustomSubnet (32.49s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-663509 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-663509 --subnet=192.168.60.0/24: (30.426767398s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-663509 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-663509" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-663509
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-663509: (2.033921416s)
--- PASS: TestKicCustomSubnet (32.49s)
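The subnet verification above reduces to a single docker CLI invocation. An equivalent standalone Go sketch, with the network name and expected subnet taken from this run:

// subnetcheck.go (illustrative re-implementation of the inspect step).
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same Go-template query the test runs against the created network.
	out, err := exec.Command("docker", "network", "inspect", "custom-subnet-663509",
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		log.Fatalf("docker network inspect: %v", err)
	}
	got := strings.TrimSpace(string(out))
	if got != "192.168.60.0/24" {
		log.Fatalf("unexpected subnet: got %s, want 192.168.60.0/24", got)
	}
	fmt.Println("subnet matches:", got)
}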

TestKicStaticIP (33.43s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-949149 --static-ip=192.168.200.200
E0811 23:21:59.473570    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/functional-327081/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-949149 --static-ip=192.168.200.200: (31.13874515s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-949149 ip
helpers_test.go:175: Cleaning up "static-ip-949149" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-949149
E0811 23:22:27.164139    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/functional-327081/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-949149: (2.129461966s)
--- PASS: TestKicStaticIP (33.43s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (76.87s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-979987 --driver=docker  --container-runtime=crio
E0811 23:22:38.461207    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-979987 --driver=docker  --container-runtime=crio: (35.137873038s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-982440 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-982440 --driver=docker  --container-runtime=crio: (36.02194659s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-979987
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-982440
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-982440" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-982440
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-982440: (2.131658505s)
helpers_test.go:175: Cleaning up "first-979987" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-979987
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-979987: (2.33534755s)
--- PASS: TestMinikubeProfile (76.87s)

TestMountStart/serial/StartWithMountFirst (9.69s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-256688 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-256688 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.69279323s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.69s)

TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-256688 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (7.13s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-258542 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-258542 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.134596898s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.13s)

TestMountStart/serial/VerifyMountSecond (0.29s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-258542 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

TestMountStart/serial/DeleteFirst (1.68s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-256688 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-256688 --alsologtostderr -v=5: (1.681121758s)
--- PASS: TestMountStart/serial/DeleteFirst (1.68s)

TestMountStart/serial/VerifyMountPostDelete (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-258542 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

TestMountStart/serial/Stop (1.22s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-258542
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-258542: (1.224316285s)
--- PASS: TestMountStart/serial/Stop (1.22s)

TestMountStart/serial/RestartStopped (7.94s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-258542
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-258542: (6.937797965s)
--- PASS: TestMountStart/serial/RestartStopped (7.94s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-258542 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.29s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (98.56s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-891155 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0811 23:24:24.803223    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/client.crt: no such file or directory
E0811 23:24:54.617134    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/client.crt: no such file or directory
E0811 23:25:22.302280    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/client.crt: no such file or directory
E0811 23:25:47.849694    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-arm64 start -p multinode-891155 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m37.985697674s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p multinode-891155 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (98.56s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.42s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-891155 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-891155 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-891155 -- rollout status deployment/busybox: (3.304848094s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-891155 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-891155 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-891155 -- exec busybox-67b7f59bb-qc8x6 -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-891155 -- exec busybox-67b7f59bb-xv9cw -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-891155 -- exec busybox-67b7f59bb-qc8x6 -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-891155 -- exec busybox-67b7f59bb-xv9cw -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-891155 -- exec busybox-67b7f59bb-qc8x6 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-891155 -- exec busybox-67b7f59bb-xv9cw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.42s)
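
The DNS checks above boil down to a handful of kubectl queries per pod. A sketch of the equivalent manual steps, assuming kubectl is pointed at the multinode cluster (the test reaches it via `out/minikube-linux-arm64 kubectl -p multinode-891155 --`):

	# one busybox replica should land on each node, each with its own pod IP
	kubectl get pods -o jsonpath='{.items[*].status.podIP}'
	kubectl get pods -o jsonpath='{.items[*].metadata.name}'
	# external and in-cluster names must resolve from every replica
	kubectl exec <pod-name> -- nslookup kubernetes.io
	kubectl exec <pod-name> -- nslookup kubernetes.default.svc.cluster.local

<pod-name> is a placeholder for the generated busybox pod names seen in the log.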

                                                
                                    
TestMultiNode/serial/AddNode (50.78s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-891155 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-891155 -v 3 --alsologtostderr: (50.057127269s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p multinode-891155 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (50.78s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.35s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.35s)

                                                
                                    
TestMultiNode/serial/CopyFile (11.12s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-arm64 -p multinode-891155 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-891155 cp testdata/cp-test.txt multinode-891155:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-891155 ssh -n multinode-891155 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-891155 cp multinode-891155:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2059602082/001/cp-test_multinode-891155.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-891155 ssh -n multinode-891155 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-891155 cp multinode-891155:/home/docker/cp-test.txt multinode-891155-m02:/home/docker/cp-test_multinode-891155_multinode-891155-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-891155 ssh -n multinode-891155 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-891155 ssh -n multinode-891155-m02 "sudo cat /home/docker/cp-test_multinode-891155_multinode-891155-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-891155 cp multinode-891155:/home/docker/cp-test.txt multinode-891155-m03:/home/docker/cp-test_multinode-891155_multinode-891155-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-891155 ssh -n multinode-891155 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-891155 ssh -n multinode-891155-m03 "sudo cat /home/docker/cp-test_multinode-891155_multinode-891155-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-891155 cp testdata/cp-test.txt multinode-891155-m02:/home/docker/cp-test.txt
E0811 23:26:59.473943    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/functional-327081/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-891155 ssh -n multinode-891155-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-891155 cp multinode-891155-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2059602082/001/cp-test_multinode-891155-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-891155 ssh -n multinode-891155-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-891155 cp multinode-891155-m02:/home/docker/cp-test.txt multinode-891155:/home/docker/cp-test_multinode-891155-m02_multinode-891155.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-891155 ssh -n multinode-891155-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-891155 ssh -n multinode-891155 "sudo cat /home/docker/cp-test_multinode-891155-m02_multinode-891155.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-891155 cp multinode-891155-m02:/home/docker/cp-test.txt multinode-891155-m03:/home/docker/cp-test_multinode-891155-m02_multinode-891155-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-891155 ssh -n multinode-891155-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-891155 ssh -n multinode-891155-m03 "sudo cat /home/docker/cp-test_multinode-891155-m02_multinode-891155-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-891155 cp testdata/cp-test.txt multinode-891155-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-891155 ssh -n multinode-891155-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-891155 cp multinode-891155-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2059602082/001/cp-test_multinode-891155-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-891155 ssh -n multinode-891155-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-891155 cp multinode-891155-m03:/home/docker/cp-test.txt multinode-891155:/home/docker/cp-test_multinode-891155-m03_multinode-891155.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-891155 ssh -n multinode-891155-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-891155 ssh -n multinode-891155 "sudo cat /home/docker/cp-test_multinode-891155-m03_multinode-891155.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-891155 cp multinode-891155-m03:/home/docker/cp-test.txt multinode-891155-m02:/home/docker/cp-test_multinode-891155-m03_multinode-891155-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-891155 ssh -n multinode-891155-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-891155 ssh -n multinode-891155-m02 "sudo cat /home/docker/cp-test_multinode-891155-m03_multinode-891155-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.12s)
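
Each copy above is verified the same way: push a file, then cat it back over ssh. A condensed sketch of one round trip (minikube standing in for the test binary; the destination file name is shortened here):

	minikube -p multinode-891155 cp testdata/cp-test.txt multinode-891155-m02:/home/docker/cp-test.txt
	minikube -p multinode-891155 ssh -n multinode-891155-m02 "sudo cat /home/docker/cp-test.txt"
	# node-to-node copies are exercised as well
	minikube -p multinode-891155 cp multinode-891155-m02:/home/docker/cp-test.txt \
	  multinode-891155:/home/docker/cp-test_copy.txt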

                                                
                                    
TestMultiNode/serial/StopNode (2.38s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-arm64 -p multinode-891155 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-arm64 -p multinode-891155 node stop m03: (1.239172469s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-arm64 -p multinode-891155 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-891155 status: exit status 7 (560.548434ms)

                                                
                                                
-- stdout --
	multinode-891155
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-891155-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-891155-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-arm64 -p multinode-891155 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-891155 status --alsologtostderr: exit status 7 (579.027521ms)

                                                
                                                
-- stdout --
	multinode-891155
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-891155-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-891155-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0811 23:27:07.923604   80967 out.go:296] Setting OutFile to fd 1 ...
	I0811 23:27:07.923772   80967 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:27:07.923780   80967 out.go:309] Setting ErrFile to fd 2...
	I0811 23:27:07.923786   80967 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:27:07.924050   80967 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17044-2333/.minikube/bin
	I0811 23:27:07.924227   80967 out.go:303] Setting JSON to false
	I0811 23:27:07.924296   80967 mustload.go:65] Loading cluster: multinode-891155
	I0811 23:27:07.924384   80967 notify.go:220] Checking for updates...
	I0811 23:27:07.924734   80967 config.go:182] Loaded profile config "multinode-891155": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0811 23:27:07.924745   80967 status.go:255] checking status of multinode-891155 ...
	I0811 23:27:07.925294   80967 cli_runner.go:164] Run: docker container inspect multinode-891155 --format={{.State.Status}}
	I0811 23:27:07.952286   80967 status.go:330] multinode-891155 host status = "Running" (err=<nil>)
	I0811 23:27:07.952329   80967 host.go:66] Checking if "multinode-891155" exists ...
	I0811 23:27:07.952632   80967 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-891155
	I0811 23:27:07.974548   80967 host.go:66] Checking if "multinode-891155" exists ...
	I0811 23:27:07.974842   80967 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0811 23:27:07.974886   80967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-891155
	I0811 23:27:07.999652   80967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/multinode-891155/id_rsa Username:docker}
	I0811 23:27:08.103361   80967 ssh_runner.go:195] Run: systemctl --version
	I0811 23:27:08.108893   80967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0811 23:27:08.122548   80967 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0811 23:27:08.214891   80967 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2023-08-11 23:27:08.204888304 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0811 23:27:08.215524   80967 kubeconfig.go:92] found "multinode-891155" server: "https://192.168.58.2:8443"
	I0811 23:27:08.215555   80967 api_server.go:166] Checking apiserver status ...
	I0811 23:27:08.215607   80967 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0811 23:27:08.228345   80967 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1271/cgroup
	I0811 23:27:08.239752   80967 api_server.go:182] apiserver freezer: "6:freezer:/docker/91ef6749902a9755bddb5f5abcfefe4686ab106ec29738faccd66ae8be66b0e1/crio/crio-6309b2e0d3037f364555db6eee02a9079d861250f3bbea918e7e587d47f22954"
	I0811 23:27:08.239830   80967 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/91ef6749902a9755bddb5f5abcfefe4686ab106ec29738faccd66ae8be66b0e1/crio/crio-6309b2e0d3037f364555db6eee02a9079d861250f3bbea918e7e587d47f22954/freezer.state
	I0811 23:27:08.250338   80967 api_server.go:204] freezer state: "THAWED"
	I0811 23:27:08.250367   80967 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0811 23:27:08.259317   80967 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0811 23:27:08.259342   80967 status.go:421] multinode-891155 apiserver status = Running (err=<nil>)
	I0811 23:27:08.259354   80967 status.go:257] multinode-891155 status: &{Name:multinode-891155 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0811 23:27:08.259370   80967 status.go:255] checking status of multinode-891155-m02 ...
	I0811 23:27:08.259676   80967 cli_runner.go:164] Run: docker container inspect multinode-891155-m02 --format={{.State.Status}}
	I0811 23:27:08.278079   80967 status.go:330] multinode-891155-m02 host status = "Running" (err=<nil>)
	I0811 23:27:08.278100   80967 host.go:66] Checking if "multinode-891155-m02" exists ...
	I0811 23:27:08.278406   80967 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-891155-m02
	I0811 23:27:08.296395   80967 host.go:66] Checking if "multinode-891155-m02" exists ...
	I0811 23:27:08.296692   80967 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0811 23:27:08.296738   80967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-891155-m02
	I0811 23:27:08.314854   80967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17044-2333/.minikube/machines/multinode-891155-m02/id_rsa Username:docker}
	I0811 23:27:08.415245   80967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0811 23:27:08.429243   80967 status.go:257] multinode-891155-m02 status: &{Name:multinode-891155-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0811 23:27:08.429277   80967 status.go:255] checking status of multinode-891155-m03 ...
	I0811 23:27:08.429577   80967 cli_runner.go:164] Run: docker container inspect multinode-891155-m03 --format={{.State.Status}}
	I0811 23:27:08.447027   80967 status.go:330] multinode-891155-m03 host status = "Stopped" (err=<nil>)
	I0811 23:27:08.447050   80967 status.go:343] host is not running, skipping remaining checks
	I0811 23:27:08.447057   80967 status.go:257] multinode-891155-m03 status: &{Name:multinode-891155-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.38s)
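
Note the status semantics here: with one node stopped, `minikube status` still prints the full table but exits non-zero. A sketch of the check, assuming exit code 7 continues to mean "at least one host is stopped" as it does in this run:

	minikube -p multinode-891155 node stop m03
	minikube -p multinode-891155 status
	echo $?    # 7 here; the harness treats this as "may be ok"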

                                                
                                    
TestMultiNode/serial/StartAfterStop (12.11s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-891155 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-arm64 -p multinode-891155 node start m03 --alsologtostderr: (11.249198974s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-891155 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (12.11s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (122.06s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-891155
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-891155
multinode_test.go:290: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-891155: (25.074782935s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-891155 --wait=true -v=8 --alsologtostderr
multinode_test.go:295: (dbg) Done: out/minikube-linux-arm64 start -p multinode-891155 --wait=true -v=8 --alsologtostderr: (1m36.844239503s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-891155
--- PASS: TestMultiNode/serial/RestartKeepsNodes (122.06s)
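
The restart invariant being tested is that the node set survives a full stop/start cycle. A minimal sketch (minikube standing in for the test binary):

	minikube node list -p multinode-891155     # record the three nodes
	minikube stop -p multinode-891155
	minikube start -p multinode-891155 --wait=true
	minikube node list -p multinode-891155     # expect the same node list back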

                                                
                                    
TestMultiNode/serial/DeleteNode (5.11s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-arm64 -p multinode-891155 node delete m03
E0811 23:29:24.802924    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/client.crt: no such file or directory
multinode_test.go:394: (dbg) Done: out/minikube-linux-arm64 -p multinode-891155 node delete m03: (4.340975515s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-arm64 -p multinode-891155 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.11s)
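
The Ready check at the end uses a go-template to flatten node conditions. Cleaned of the harness's extra quoting, the query looks like this:

	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
	# expect one "True" line per remaining node after the delete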

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.07s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p multinode-891155 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p multinode-891155 stop: (23.894813369s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-arm64 -p multinode-891155 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-891155 status: exit status 7 (89.339749ms)

                                                
                                                
-- stdout --
	multinode-891155
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-891155-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-arm64 -p multinode-891155 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-891155 status --alsologtostderr: exit status 7 (90.063962ms)

                                                
                                                
-- stdout --
	multinode-891155
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-891155-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0811 23:29:51.750984   89133 out.go:296] Setting OutFile to fd 1 ...
	I0811 23:29:51.751100   89133 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:29:51.751109   89133 out.go:309] Setting ErrFile to fd 2...
	I0811 23:29:51.751115   89133 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:29:51.751369   89133 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17044-2333/.minikube/bin
	I0811 23:29:51.751534   89133 out.go:303] Setting JSON to false
	I0811 23:29:51.751617   89133 mustload.go:65] Loading cluster: multinode-891155
	I0811 23:29:51.751669   89133 notify.go:220] Checking for updates...
	I0811 23:29:51.751989   89133 config.go:182] Loaded profile config "multinode-891155": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0811 23:29:51.752001   89133 status.go:255] checking status of multinode-891155 ...
	I0811 23:29:51.753016   89133 cli_runner.go:164] Run: docker container inspect multinode-891155 --format={{.State.Status}}
	I0811 23:29:51.771890   89133 status.go:330] multinode-891155 host status = "Stopped" (err=<nil>)
	I0811 23:29:51.771928   89133 status.go:343] host is not running, skipping remaining checks
	I0811 23:29:51.771935   89133 status.go:257] multinode-891155 status: &{Name:multinode-891155 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0811 23:29:51.771957   89133 status.go:255] checking status of multinode-891155-m02 ...
	I0811 23:29:51.772261   89133 cli_runner.go:164] Run: docker container inspect multinode-891155-m02 --format={{.State.Status}}
	I0811 23:29:51.795248   89133 status.go:330] multinode-891155-m02 host status = "Stopped" (err=<nil>)
	I0811 23:29:51.795270   89133 status.go:343] host is not running, skipping remaining checks
	I0811 23:29:51.795277   89133 status.go:257] multinode-891155-m02 status: &{Name:multinode-891155-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.07s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (86.37s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-891155 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0811 23:29:54.617750    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-arm64 start -p multinode-891155 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m25.596921566s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-arm64 -p multinode-891155 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (86.37s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (34.24s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-891155
multinode_test.go:452: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-891155-m02 --driver=docker  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-891155-m02 --driver=docker  --container-runtime=crio: exit status 14 (78.539929ms)

                                                
                                                
-- stdout --
	* [multinode-891155-m02] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17044
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17044-2333/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17044-2333/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-891155-m02' is duplicated with machine name 'multinode-891155-m02' in profile 'multinode-891155'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-891155-m03 --driver=docker  --container-runtime=crio
multinode_test.go:460: (dbg) Done: out/minikube-linux-arm64 start -p multinode-891155-m03 --driver=docker  --container-runtime=crio: (31.767407236s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-891155
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-891155: exit status 80 (347.746539ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-891155
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-891155-m03 already exists in multinode-891155-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-891155-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-891155-m03: (1.991530566s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.24s)
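
Two failure modes are asserted above: a new profile may not reuse an existing machine name (exit 14), and `node add` refuses to proceed when a standalone profile already owns the would-be node name (exit 80). Sketched with minikube in place of the test binary:

	minikube start -p multinode-891155-m02 --driver=docker --container-runtime=crio   # exit 14, MK_USAGE
	minikube start -p multinode-891155-m03 --driver=docker --container-runtime=crio   # succeeds as its own profile
	minikube node add -p multinode-891155                                             # exit 80, GUEST_NODE_ADD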

                                                
                                    
TestPreload (166.29s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-500701 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0811 23:31:59.474246    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/functional-327081/client.crt: no such file or directory
E0811 23:33:22.524457    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/functional-327081/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-500701 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m23.632089153s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-500701 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-500701 image pull gcr.io/k8s-minikube/busybox: (2.165715611s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-500701
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-500701: (5.87964506s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-500701 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0811 23:34:24.802938    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-500701 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m11.978611622s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-500701 image list
helpers_test.go:175: Cleaning up "test-preload-500701" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-500701
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-500701: (2.397592831s)
--- PASS: TestPreload (166.29s)
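
The preload flow above is: create a cluster with preloaded images disabled on an older Kubernetes, pull an extra image, stop, restart on defaults, and confirm the image survived. A compressed sketch (profile name shortened, flags trimmed to the ones that matter):

	minikube start -p test-preload --preload=false --kubernetes-version=v1.24.4 --driver=docker --container-runtime=crio
	minikube -p test-preload image pull gcr.io/k8s-minikube/busybox
	minikube stop -p test-preload
	minikube start -p test-preload --driver=docker --container-runtime=crio
	minikube -p test-preload image list    # busybox should still be listed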

                                                
                                    
TestScheduledStopUnix (109.38s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-964473 --memory=2048 --driver=docker  --container-runtime=crio
E0811 23:34:54.617675    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-964473 --memory=2048 --driver=docker  --container-runtime=crio: (32.770724401s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-964473 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-964473 -n scheduled-stop-964473
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-964473 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-964473 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-964473 -n scheduled-stop-964473
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-964473
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-964473 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0811 23:36:17.662551    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-964473
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-964473: exit status 7 (70.79921ms)

                                                
                                                
-- stdout --
	scheduled-stop-964473
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-964473 -n scheduled-stop-964473
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-964473 -n scheduled-stop-964473: exit status 7 (68.586265ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-964473" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-964473
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-964473: (4.959309299s)
--- PASS: TestScheduledStopUnix (109.38s)
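
The scheduled-stop surface tested above: schedule, inspect, cancel, reschedule, and observe the eventual stop. A sketch under the assumption that a fired schedule leaves the profile Stopped with `status` exiting 7, as seen in this run (profile name shortened):

	minikube stop -p scheduled-stop --schedule 5m
	minikube status --format='{{.TimeToStop}}' -p scheduled-stop
	minikube stop -p scheduled-stop --cancel-scheduled
	minikube stop -p scheduled-stop --schedule 15s
	sleep 20; minikube status -p scheduled-stop; echo $?    # Stopped / exit 7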

                                                
                                    
TestInsufficientStorage (13.41s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-256114 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-256114 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.860170917s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a6d7b6b8-1591-4dd5-b79e-d95775b2a7c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-256114] minikube v1.31.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"09dbb81c-54e6-4f5f-acc2-72e39a9626b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17044"}}
	{"specversion":"1.0","id":"1e6fafc7-5487-43fe-8ce1-70d24fd2292d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"03b62ecc-bed8-4261-92bf-9730a5a81bf9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17044-2333/kubeconfig"}}
	{"specversion":"1.0","id":"be20f5e2-5ece-495a-b5f9-064612c9e9f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17044-2333/.minikube"}}
	{"specversion":"1.0","id":"6f6b2c8f-c59f-4a8f-bfe1-3801fb0d68bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"b6b1a18c-9233-4fcd-adaa-bcc1b2dcdd0a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"fe5a69cb-9cec-4385-b308-f412c5252a7f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"dd6d300d-42bf-4576-90a1-472a2fc77d33","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"dbba01ac-becb-417f-9d70-7f9881d47692","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ceec45db-81f3-45c4-8486-5bb1877860f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"ed70764f-b0f3-4ec6-be32-ff9a0188a14f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-256114 in cluster insufficient-storage-256114","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f6ff9913-c4d2-48d8-9c6a-c96ebe90b9ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"8c012831-837e-4c7e-ac17-19ad41154970","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"1e73597a-cd72-418a-9465-638eb307b138","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-256114 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-256114 --output=json --layout=cluster: exit status 7 (318.404646ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-256114","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.31.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-256114","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0811 23:36:45.748322  105834 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-256114" does not appear in /home/jenkins/minikube-integration/17044-2333/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-256114 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-256114 --output=json --layout=cluster: exit status 7 (325.738932ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-256114","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.31.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-256114","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0811 23:36:46.075403  105889 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-256114" does not appear in /home/jenkins/minikube-integration/17044-2333/kubeconfig
	E0811 23:36:46.087541  105889 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/insufficient-storage-256114/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-256114" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-256114
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-256114: (1.907946117s)
--- PASS: TestInsufficientStorage (13.41s)
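
The out-of-space condition is simulated, not real: the environment dump in the JSON output shows MINIKUBE_TEST_STORAGE_CAPACITY=100 and MINIKUBE_TEST_AVAILABLE_STORAGE=19, which makes /var appear full. A sketch of the reproduction, assuming those test-only variables behave as they do here:

	MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
	  minikube start -p insufficient-storage --memory=2048 --output=json --wait=true \
	  --driver=docker --container-runtime=crio            # exit 26, RSRC_DOCKER_STORAGE
	minikube status -p insufficient-storage --output=json --layout=cluster   # StatusCode 507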

                                                
                                    
TestKubernetesUpgrade (384.08s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-788862 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-788862 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m5.846776496s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-788862
version_upgrade_test.go:239: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-788862: (1.2688049s)
version_upgrade_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-788862 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-788862 status --format={{.Host}}: exit status 7 (79.736855ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-788862 --memory=2200 --kubernetes-version=v1.28.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0811 23:39:24.802866    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/client.crt: no such file or directory
version_upgrade_test.go:255: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-788862 --memory=2200 --kubernetes-version=v1.28.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m43.08123599s)
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-788862 version --output=json
version_upgrade_test.go:279: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:281: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-788862 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:281: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-788862 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (82.52262ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-788862] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17044
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17044-2333/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17044-2333/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.0-rc.0 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-788862
	    minikube start -p kubernetes-upgrade-788862 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7888622 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.0-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-788862 --kubernetes-version=v1.28.0-rc.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:285: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:287: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-788862 --memory=2200 --kubernetes-version=v1.28.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:287: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-788862 --memory=2200 --kubernetes-version=v1.28.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (30.895785072s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-788862" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-788862
E0811 23:44:24.803327    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-788862: (2.713689956s)
--- PASS: TestKubernetesUpgrade (384.08s)
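
The upgrade path being validated: old version up, stop, same profile restarted on a newer version, then a refused downgrade. A condensed sketch of the logged commands (profile name shortened):

	minikube start -p kubernetes-upgrade --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio
	minikube stop -p kubernetes-upgrade
	minikube start -p kubernetes-upgrade --kubernetes-version=v1.28.0-rc.0 --driver=docker --container-runtime=crio
	# downgrades are rejected outright (exit 106, K8S_DOWNGRADE_UNSUPPORTED)
	minikube start -p kubernetes-upgrade --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio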

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-886838 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-886838 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (85.928613ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-886838] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17044
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17044-2333/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17044-2333/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
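
The rejected flag combination, as a one-liner (exit 14, MK_USAGE; profile name shortened):

	minikube start -p nok8s --no-kubernetes --kubernetes-version=1.20 --driver=docker --container-runtime=crio
	# as the error text suggests, a globally pinned version must be cleared first
	minikube config unset kubernetes-version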

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (39.92s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-886838 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-886838 --driver=docker  --container-runtime=crio: (39.31537553s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-886838 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.92s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (10.11s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-886838 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-886838 --no-kubernetes --driver=docker  --container-runtime=crio: (7.471518103s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-886838 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-886838 status -o json: exit status 2 (453.004984ms)

-- stdout --
	{"Name":"NoKubernetes-886838","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
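The exit status 2 here is informative rather than fatal: the host container is Running while the Kubernetes components are Stopped, which is exactly what --no-kubernetes should produce. A quick way to pull the same fields out of the JSON by hand (jq is an assumption, not part of the suite; the non-zero exit does not affect the piped stdout):

  # Prints Running / Stopped / Stopped for a no-kubernetes profile.
  out/minikube-linux-arm64 -p NoKubernetes-886838 status -o json \
    | jq -r '.Host, .Kubelet, .APIServer'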
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-886838
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-886838: (2.184674066s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (10.11s)

TestNoKubernetes/serial/Start (10.73s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-886838 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-886838 --no-kubernetes --driver=docker  --container-runtime=crio: (10.728244519s)
--- PASS: TestNoKubernetes/serial/Start (10.73s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.49s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-886838 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-886838 "sudo systemctl is-active --quiet service kubelet": exit status 1 (493.481358ms)

** stderr **
	ssh: Process exited with status 3
** /stderr **
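Status 3 from systemctl is-active means the unit is inactive, so the non-zero exit is the passing outcome here. A sketch of the same probe run by hand:

  # --quiet suppresses output; the exit code alone carries the answer:
  # 0 = active, 3 = inactive (what this test expects with Kubernetes disabled).
  out/minikube-linux-arm64 ssh -p NoKubernetes-886838 \
    "sudo systemctl is-active --quiet service kubelet" \
    || echo "kubelet inactive (expected)"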
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.49s)

TestNoKubernetes/serial/ProfileList (1.12s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.12s)

TestNoKubernetes/serial/Stop (1.38s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-886838
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-886838: (1.377189769s)
--- PASS: TestNoKubernetes/serial/Stop (1.38s)

TestNoKubernetes/serial/StartNoArgs (7.72s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-886838 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-886838 --driver=docker  --container-runtime=crio: (7.719931821s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.72s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-886838 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-886838 "sudo systemctl is-active --quiet service kubelet": exit status 1 (318.087964ms)

** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

TestStoppedBinaryUpgrade/Setup (1.17s)
=== RUN   TestStoppedBinaryUpgrade/Setup
E0811 23:39:54.617128    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/client.crt: no such file or directory
--- PASS: TestStoppedBinaryUpgrade/Setup (1.17s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.65s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-773979
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.65s)

TestPause/serial/Start (79.46s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-634825 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-634825 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m19.456762881s)
--- PASS: TestPause/serial/Start (79.46s)

TestNetworkPlugins/group/false (6.28s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-411151 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-411151 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (345.842719ms)

-- stdout --
	* [false-411151] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17044
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17044-2333/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17044-2333/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
-- /stdout --
** stderr **
	I0811 23:44:49.404566  145119 out.go:296] Setting OutFile to fd 1 ...
	I0811 23:44:49.404783  145119 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:44:49.404808  145119 out.go:309] Setting ErrFile to fd 2...
	I0811 23:44:49.404826  145119 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0811 23:44:49.407408  145119 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17044-2333/.minikube/bin
	I0811 23:44:49.407907  145119 out.go:303] Setting JSON to false
	I0811 23:44:49.409218  145119 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5238,"bootTime":1691792252,"procs":380,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0811 23:44:49.409346  145119 start.go:138] virtualization:  
	I0811 23:44:49.412309  145119 out.go:177] * [false-411151] minikube v1.31.1 on Ubuntu 20.04 (arm64)
	I0811 23:44:49.414465  145119 out.go:177]   - MINIKUBE_LOCATION=17044
	I0811 23:44:49.416619  145119 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0811 23:44:49.414509  145119 notify.go:220] Checking for updates...
	I0811 23:44:49.419010  145119 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17044-2333/kubeconfig
	I0811 23:44:49.421006  145119 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17044-2333/.minikube
	I0811 23:44:49.422708  145119 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0811 23:44:49.424578  145119 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0811 23:44:49.426847  145119 config.go:182] Loaded profile config "force-systemd-flag-847326": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0811 23:44:49.426970  145119 driver.go:373] Setting default libvirt URI to qemu:///system
	I0811 23:44:49.496966  145119 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0811 23:44:49.497156  145119 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0811 23:44:49.652114  145119 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-08-11 23:44:49.635010112 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0811 23:44:49.652219  145119 docker.go:294] overlay module found
	I0811 23:44:49.654139  145119 out.go:177] * Using the docker driver based on user configuration
	I0811 23:44:49.655606  145119 start.go:298] selected driver: docker
	I0811 23:44:49.655618  145119 start.go:901] validating driver "docker" against <nil>
	I0811 23:44:49.655630  145119 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0811 23:44:49.657685  145119 out.go:177] 
	W0811 23:44:49.659159  145119 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0811 23:44:49.660899  145119 out.go:177] 

** /stderr **
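The validation failure is the point of this test: minikube rejects --cni=false when the container runtime is CRI-O, since CRI-O brings no built-in pod networking of its own. A sketch of invocations that do pass validation (the explicit CNI choice is illustrative):

  # Rejected, as above: the crio runtime requires a CNI.
  minikube start -p false-411151 --cni=false --container-runtime=crio

  # Accepted: let minikube pick a CNI, or name one explicitly.
  minikube start -p false-411151 --container-runtime=crio
  minikube start -p false-411151 --cni=bridge --container-runtime=crio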
E0811 23:44:54.621219    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/client.crt: no such file or directory
net_test.go:88: 
----------------------- debugLogs start: false-411151 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-411151

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-411151

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-411151

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-411151

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-411151

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-411151

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-411151

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-411151

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-411151

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-411151

>>> host: /etc/nsswitch.conf:
* Profile "false-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411151"

>>> host: /etc/hosts:
* Profile "false-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411151"

>>> host: /etc/resolv.conf:
* Profile "false-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411151"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-411151

>>> host: crictl pods:
* Profile "false-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411151"

>>> host: crictl containers:
* Profile "false-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411151"

>>> k8s: describe netcat deployment:
error: context "false-411151" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-411151" does not exist

>>> k8s: netcat logs:
error: context "false-411151" does not exist

>>> k8s: describe coredns deployment:
error: context "false-411151" does not exist

>>> k8s: describe coredns pods:
error: context "false-411151" does not exist

>>> k8s: coredns logs:
error: context "false-411151" does not exist

>>> k8s: describe api server pod(s):
error: context "false-411151" does not exist

>>> k8s: api server logs:
error: context "false-411151" does not exist

>>> host: /etc/cni:
* Profile "false-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411151"

>>> host: ip a s:
* Profile "false-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411151"

>>> host: ip r s:
* Profile "false-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411151"

>>> host: iptables-save:
* Profile "false-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411151"

>>> host: iptables table nat:
* Profile "false-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411151"

>>> k8s: describe kube-proxy daemon set:
error: context "false-411151" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-411151" does not exist

>>> k8s: kube-proxy logs:
error: context "false-411151" does not exist

>>> host: kubelet daemon status:
* Profile "false-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411151"

>>> host: kubelet daemon config:
* Profile "false-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411151"

>>> k8s: kubelet logs:
* Profile "false-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411151"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411151"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411151"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-411151

>>> host: docker daemon status:
* Profile "false-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411151"

>>> host: docker daemon config:
* Profile "false-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411151"

>>> host: /etc/docker/daemon.json:
* Profile "false-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411151"

>>> host: docker system info:
* Profile "false-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411151"

>>> host: cri-docker daemon status:
* Profile "false-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411151"

>>> host: cri-docker daemon config:
* Profile "false-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411151"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411151"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411151"

>>> host: cri-dockerd version:
* Profile "false-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411151"

>>> host: containerd daemon status:
* Profile "false-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411151"

>>> host: containerd daemon config:
* Profile "false-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411151"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411151"

>>> host: /etc/containerd/config.toml:
* Profile "false-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411151"

>>> host: containerd config dump:
* Profile "false-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411151"

>>> host: crio daemon status:
* Profile "false-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411151"

>>> host: crio daemon config:
* Profile "false-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411151"

>>> host: /etc/crio:
* Profile "false-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411151"

>>> host: crio config:
* Profile "false-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-411151"
----------------------- debugLogs end: false-411151 [took: 5.647887828s] --------------------------------
helpers_test.go:175: Cleaning up "false-411151" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-411151
--- PASS: TestNetworkPlugins/group/false (6.28s)

TestStartStop/group/old-k8s-version/serial/FirstStart (136.93s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-798936 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E0811 23:46:59.473737    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/functional-327081/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-798936 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m16.926336307s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (136.93s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.55s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-798936 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [cbd30571-89cd-43a2-a40f-f045c206d0c1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [cbd30571-89cd-43a2-a40f-f045c206d0c1] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.02922191s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-798936 exec busybox -- /bin/sh -c "ulimit -n"
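The deploy-and-probe pattern above is plain kubectl; a minimal equivalent outside the harness (the wait timeout mirrors the 8m0s above, the manifest path is the test's own):

  kubectl --context old-k8s-version-798936 create -f testdata/busybox.yaml
  # Block until the pod carrying the test's label is Ready, then check the fd limit:
  kubectl --context old-k8s-version-798936 wait --for=condition=ready \
    pod -l integration-test=busybox --timeout=8m
  kubectl --context old-k8s-version-798936 exec busybox -- /bin/sh -c "ulimit -n"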
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.55s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.94s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-798936 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-798936 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.618106337s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-798936 describe deploy/metrics-server -n kube-system
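The addon is enabled with its image and registry overridden to a deliberately unreachable fake.domain; the describe call only confirms the override landed. One way to read the resolved image directly (a jsonpath sketch; the expected value is an assumption about how minikube composes --registries with --images):

  # Should print something like fake.domain/registry.k8s.io/echoserver:1.4:
  kubectl --context old-k8s-version-798936 -n kube-system get deploy metrics-server \
    -o jsonpath='{.spec.template.spec.containers[0].image}'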
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.94s)

TestStartStop/group/old-k8s-version/serial/Stop (12.12s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-798936 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-798936 --alsologtostderr -v=3: (12.119983206s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.12s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-798936 -n old-k8s-version-798936
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-798936 -n old-k8s-version-798936: exit status 7 (76.65155ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
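"May be ok" because, as documented for minikube status, the exit code encodes component state on its bits (1 = host down, 2 = cluster down, 4 = Kubernetes down), so 7 is the expected value for a cleanly stopped profile. A sketch of the same check by hand:

  # Prints "Stopped" and exits 7 (1+2+4: host, cluster and Kubernetes all down):
  out/minikube-linux-arm64 status --format={{.Host}} \
    -p old-k8s-version-798936 -n old-k8s-version-798936; echo "status exit: $?"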
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-798936 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/old-k8s-version/serial/SecondStart (431.34s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-798936 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E0811 23:49:24.803368    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-798936 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m10.923449828s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-798936 -n old-k8s-version-798936
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (431.34s)

TestStartStop/group/no-preload/serial/FirstStart (104.18s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-683425 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0-rc.0
E0811 23:49:54.617809    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/client.crt: no such file or directory
E0811 23:50:02.524664    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/functional-327081/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-683425 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0-rc.0: (1m44.175544614s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (104.18s)

TestStartStop/group/no-preload/serial/DeployApp (8.56s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-683425 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fb9b608b-11ae-4fc6-9d41-025f0562d0db] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fb9b608b-11ae-4fc6-9d41-025f0562d0db] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.031629316s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-683425 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.56s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.24s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-683425 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-683425 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.101927971s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-683425 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.24s)

TestStartStop/group/no-preload/serial/Stop (12.16s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-683425 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-683425 --alsologtostderr -v=3: (12.155394221s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.16s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-683425 -n no-preload-683425
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-683425 -n no-preload-683425: exit status 7 (66.08836ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-683425 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/no-preload/serial/SecondStart (349.25s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-683425 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0-rc.0
E0811 23:51:59.473637    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/functional-327081/client.crt: no such file or directory
E0811 23:52:57.663729    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/client.crt: no such file or directory
E0811 23:54:24.803346    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/client.crt: no such file or directory
E0811 23:54:54.618004    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-683425 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0-rc.0: (5m48.685986799s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-683425 -n no-preload-683425
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (349.25s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-dtn7w" [d5314775-6027-428e-b5af-7651b7eaa6aa] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.026979979s
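The dashboard probe is another label-selector wait; an equivalent one-liner (the timeout mirrors the 9m0s above):

  kubectl --context old-k8s-version-798936 -n kubernetes-dashboard wait \
    --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m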
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-dtn7w" [d5314775-6027-428e-b5af-7651b7eaa6aa] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009340047s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-798936 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.38s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p old-k8s-version-798936 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220726-ed811e41
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
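The image audit reads CRI state directly over SSH; a sketch for listing the same repo tags by hand (jq is an assumption, not part of the suite):

  # crictl's JSON output carries repoTags per image; flatten them for a quick
  # diff against the expected minikube image set:
  out/minikube-linux-arm64 ssh -p old-k8s-version-798936 "sudo crictl images -o json" \
    | jq -r '.images[].repoTags[]'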
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.38s)

TestStartStop/group/old-k8s-version/serial/Pause (3.5s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-798936 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-798936 -n old-k8s-version-798936
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-798936 -n old-k8s-version-798936: exit status 2 (354.881576ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-798936 -n old-k8s-version-798936
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-798936 -n old-k8s-version-798936: exit status 2 (381.692048ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-798936 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-798936 -n old-k8s-version-798936
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-798936 -n old-k8s-version-798936
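The pause check leans on the same exit-code convention: while paused, {{.APIServer}} prints Paused with exit status 2, and after unpause the final status calls above succeed silently. A condensed sketch of the cycle (the post-unpause values are inferred from the test passing):

  out/minikube-linux-arm64 pause -p old-k8s-version-798936
  out/minikube-linux-arm64 status --format={{.APIServer}} \
    -p old-k8s-version-798936 -n old-k8s-version-798936   # Paused, exit 2
  out/minikube-linux-arm64 unpause -p old-k8s-version-798936
  out/minikube-linux-arm64 status --format={{.APIServer}} \
    -p old-k8s-version-798936 -n old-k8s-version-798936   # Running, exit 0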
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.50s)

TestStartStop/group/embed-certs/serial/FirstStart (79.72s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-947522 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.4
E0811 23:56:59.473584    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/functional-327081/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-947522 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.4: (1m19.715758386s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (79.72s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (10.03s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-gzjf8" [2c923ce9-1fc9-4741-bbf6-2608250b0256] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-gzjf8" [2c923ce9-1fc9-4741-bbf6-2608250b0256] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.033480353s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (10.03s)

TestStartStop/group/embed-certs/serial/DeployApp (9.68s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-947522 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e10fc4c1-3889-49b0-bdbe-264b08ced1dc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e10fc4c1-3889-49b0-bdbe-264b08ced1dc] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.060662393s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-947522 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.68s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.15s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-gzjf8" [2c923ce9-1fc9-4741-bbf6-2608250b0256] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010679968s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-683425 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.15s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.51s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-947522 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-947522 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.330814721s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-947522 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.51s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.48s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p no-preload-683425 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.48s)
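
Note: VerifyKubernetesImages inspects the node's CRI-O image store over SSH and flags anything that is not a stock minikube image. The same listing can be pulled by hand; the jq filter in the second line is an illustrative addition and assumes jq is installed on the host:

	# dump the CRI-O image store as JSON, exactly as the test does
	out/minikube-linux-arm64 ssh -p no-preload-683425 "sudo crictl images -o json"
	# extract just the repo tags from the JSON payload (jq is an assumption)
	out/minikube-linux-arm64 ssh -p no-preload-683425 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'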

TestStartStop/group/no-preload/serial/Pause (3.92s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-683425 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-683425 --alsologtostderr -v=1: (1.183097226s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-683425 -n no-preload-683425
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-683425 -n no-preload-683425: exit status 2 (385.875443ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-683425 -n no-preload-683425
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-683425 -n no-preload-683425: exit status 2 (488.302632ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-683425 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-683425 -n no-preload-683425
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-683425 -n no-preload-683425
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.92s)
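
Note: the two exit status 2 results above are expected while the profile is paused; the suite itself marks them "(may be ok)". A trimmed sketch of the same pause/unpause cycle (only -p is passed here, whereas the test also passes -n with the node name):

	out/minikube-linux-arm64 pause -p no-preload-683425
	out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-683425   # "Paused", exit status 2
	out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-683425     # "Stopped", exit status 2
	out/minikube-linux-arm64 unpause -p no-preload-683425
	out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-683425   # expected to report Running again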

TestStartStop/group/embed-certs/serial/Stop (12.41s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-947522 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-947522 --alsologtostderr -v=3: (12.413521188s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.41s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-916572 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-916572 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.4: (1m21.011311419s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.01s)
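
Note: unlike the other profiles, this cluster serves the API on port 8444 (--apiserver-port=8444). A one-line sanity check, sketched with standard kubectl rather than the test's code; the reported control-plane URL should end in :8444:

	kubectl --context default-k8s-diff-port-916572 cluster-info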

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-947522 -n embed-certs-947522
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-947522 -n embed-certs-947522: exit status 7 (109.528142ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-947522 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)
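
Note: exit status 7 here corresponds to the Stopped host state, which the suite tolerates ("may be ok"). The addon is enabled while the profile is down, and the dashboard it configures is verified later by AddonExistsAfterStop once SecondStart brings the cluster back. A sketch of the same sequence:

	out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-947522; echo "exit=$?"   # "Stopped", exit=7
	out/minikube-linux-arm64 addons enable dashboard -p embed-certs-947522 --images=MetricsScraper=registry.k8s.io/echoserver:1.4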

TestStartStop/group/embed-certs/serial/SecondStart (630.36s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-947522 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.4
E0811 23:58:40.283733    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/old-k8s-version-798936/client.crt: no such file or directory
E0811 23:58:40.288959    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/old-k8s-version-798936/client.crt: no such file or directory
E0811 23:58:40.299208    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/old-k8s-version-798936/client.crt: no such file or directory
E0811 23:58:40.319436    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/old-k8s-version-798936/client.crt: no such file or directory
E0811 23:58:40.359661    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/old-k8s-version-798936/client.crt: no such file or directory
E0811 23:58:40.439914    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/old-k8s-version-798936/client.crt: no such file or directory
E0811 23:58:40.600288    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/old-k8s-version-798936/client.crt: no such file or directory
E0811 23:58:40.920766    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/old-k8s-version-798936/client.crt: no such file or directory
E0811 23:58:41.561558    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/old-k8s-version-798936/client.crt: no such file or directory
E0811 23:58:42.841687    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/old-k8s-version-798936/client.crt: no such file or directory
E0811 23:58:45.401888    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/old-k8s-version-798936/client.crt: no such file or directory
E0811 23:58:50.522174    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/old-k8s-version-798936/client.crt: no such file or directory
E0811 23:59:00.762920    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/old-k8s-version-798936/client.crt: no such file or directory
E0811 23:59:07.850512    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/client.crt: no such file or directory
E0811 23:59:21.243205    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/old-k8s-version-798936/client.crt: no such file or directory
E0811 23:59:24.802983    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-947522 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.4: (10m29.860932248s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-947522 -n embed-certs-947522
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (630.36s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.51s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-916572 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2f71b2c8-3ce9-4ed7-9ab8-c1420c290d49] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2f71b2c8-3ce9-4ed7-9ab8-c1420c290d49] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.027260675s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-916572 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.51s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-916572 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-916572 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.147587576s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-916572 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.28s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-916572 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-916572 --alsologtostderr -v=3: (12.032906548s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.03s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-916572 -n default-k8s-diff-port-916572
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-916572 -n default-k8s-diff-port-916572: exit status 7 (84.275983ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-916572 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (348.58s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-916572 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.4
E0811 23:59:54.617239    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/client.crt: no such file or directory
E0812 00:00:02.204214    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/old-k8s-version-798936/client.crt: no such file or directory
E0812 00:01:24.124861    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/old-k8s-version-798936/client.crt: no such file or directory
E0812 00:01:35.866274    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/no-preload-683425/client.crt: no such file or directory
E0812 00:01:35.871532    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/no-preload-683425/client.crt: no such file or directory
E0812 00:01:35.881846    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/no-preload-683425/client.crt: no such file or directory
E0812 00:01:35.902144    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/no-preload-683425/client.crt: no such file or directory
E0812 00:01:35.942387    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/no-preload-683425/client.crt: no such file or directory
E0812 00:01:36.022770    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/no-preload-683425/client.crt: no such file or directory
E0812 00:01:36.183147    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/no-preload-683425/client.crt: no such file or directory
E0812 00:01:36.503500    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/no-preload-683425/client.crt: no such file or directory
E0812 00:01:37.143712    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/no-preload-683425/client.crt: no such file or directory
E0812 00:01:38.423855    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/no-preload-683425/client.crt: no such file or directory
E0812 00:01:40.984458    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/no-preload-683425/client.crt: no such file or directory
E0812 00:01:46.105480    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/no-preload-683425/client.crt: no such file or directory
E0812 00:01:56.345820    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/no-preload-683425/client.crt: no such file or directory
E0812 00:01:59.473600    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/functional-327081/client.crt: no such file or directory
E0812 00:02:16.826017    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/no-preload-683425/client.crt: no such file or directory
E0812 00:02:57.786216    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/no-preload-683425/client.crt: no such file or directory
E0812 00:03:40.283839    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/old-k8s-version-798936/client.crt: no such file or directory
E0812 00:04:07.965754    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/old-k8s-version-798936/client.crt: no such file or directory
E0812 00:04:19.706427    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/no-preload-683425/client.crt: no such file or directory
E0812 00:04:24.802717    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/client.crt: no such file or directory
E0812 00:04:54.617749    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-916572 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.4: (5m47.993743713s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-916572 -n default-k8s-diff-port-916572
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (348.58s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (9.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-h92gv" [2b038c4e-0021-4dbd-ada3-11e86428c694] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-h92gv" [2b038c4e-0021-4dbd-ada3-11e86428c694] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.034129564s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (9.04s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-h92gv" [2b038c4e-0021-4dbd-ada3-11e86428c694] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011837298s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-916572 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p default-k8s-diff-port-916572 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.39s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.58s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-916572 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-916572 -n default-k8s-diff-port-916572
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-916572 -n default-k8s-diff-port-916572: exit status 2 (373.948192ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-916572 -n default-k8s-diff-port-916572
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-916572 -n default-k8s-diff-port-916572: exit status 2 (347.056648ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-916572 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-916572 -n default-k8s-diff-port-916572
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-916572 -n default-k8s-diff-port-916572
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.58s)

TestStartStop/group/newest-cni/serial/FirstStart (45.43s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-729103 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0-rc.0
E0812 00:06:35.865814    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/no-preload-683425/client.crt: no such file or directory
E0812 00:06:42.524859    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/functional-327081/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-729103 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0-rc.0: (45.430119567s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (45.43s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-729103 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-729103 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.113485323s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/newest-cni/serial/Stop (1.29s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-729103 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-729103 --alsologtostderr -v=3: (1.287463887s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.29s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-729103 -n newest-cni-729103
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-729103 -n newest-cni-729103: exit status 7 (67.357188ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-729103 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/SecondStart (30.79s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-729103 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0-rc.0
E0812 00:06:59.474266    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/functional-327081/client.crt: no such file or directory
E0812 00:07:03.547528    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/no-preload-683425/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-729103 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0-rc.0: (30.356954358s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-729103 -n newest-cni-729103
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (30.79s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.37s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p newest-cni-729103 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.37s)

TestStartStop/group/newest-cni/serial/Pause (3.21s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-729103 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-729103 -n newest-cni-729103
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-729103 -n newest-cni-729103: exit status 2 (358.212591ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-729103 -n newest-cni-729103
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-729103 -n newest-cni-729103: exit status 2 (364.802126ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-729103 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-729103 -n newest-cni-729103
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-729103 -n newest-cni-729103
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.21s)

TestNetworkPlugins/group/auto/Start (81.41s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-411151 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0812 00:08:40.283925    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/old-k8s-version-798936/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-411151 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m21.410412333s)
--- PASS: TestNetworkPlugins/group/auto/Start (81.41s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-dvmwr" [b5248f43-4a6e-4a8c-8535-399b5fc34067] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.028795194s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.03s)

TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-411151 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

TestNetworkPlugins/group/auto/NetCatPod (12.43s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-411151 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-5nx9d" [2019a1bb-fbc3-416c-9001-66cd55959fe8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-5nx9d" [2019a1bb-fbc3-416c-9001-66cd55959fe8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.010963798s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.43s)
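
Note: NetCatPod rolls out the suite's netcat deployment (a dnsutils-based container, per the Pending status above) and waits for it to become Ready; the DNS/Localhost/HairPin tests below exec into it. Reproduced by hand, with kubectl wait standing in for the 15m polling loop:

	kubectl --context auto-411151 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context auto-411151 wait --for=condition=ready pod -l app=netcat --timeout=15m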

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-dvmwr" [b5248f43-4a6e-4a8c-8535-399b5fc34067] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011611864s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-947522 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p embed-certs-947522 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.34s)

TestStartStop/group/embed-certs/serial/Pause (3.43s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-947522 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-947522 -n embed-certs-947522
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-947522 -n embed-certs-947522: exit status 2 (363.20115ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-947522 -n embed-certs-947522
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-947522 -n embed-certs-947522: exit status 2 (372.057537ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-947522 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-947522 -n embed-certs-947522
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-947522 -n embed-certs-947522
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.43s)
E0812 00:14:54.617194    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/client.crt: no such file or directory
E0812 00:14:58.612644    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/default-k8s-diff-port-916572/client.crt: no such file or directory
E0812 00:15:03.325911    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/old-k8s-version-798936/client.crt: no such file or directory

TestNetworkPlugins/group/auto/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-411151 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.30s)

TestNetworkPlugins/group/auto/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-411151 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.23s)

TestNetworkPlugins/group/auto/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-411151 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.22s)
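
Note: DNS, Localhost, and HairPin exercise three distinct paths from inside the netcat pod: cluster DNS resolution, a loopback connection, and a hairpin connection back to the pod through its own service. The three probes as standalone commands, taken verbatim from the runs above:

	# cluster DNS: resolve the kubernetes service
	kubectl --context auto-411151 exec deployment/netcat -- nslookup kubernetes.default
	# localhost: connect to port 8080 inside the pod
	kubectl --context auto-411151 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	# hairpin: reach the pod back through the netcat service name
	kubectl --context auto-411151 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"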

TestNetworkPlugins/group/kindnet/Start (84.34s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-411151 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0812 00:09:24.803141    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-411151 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m24.340260044s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (84.34s)

TestNetworkPlugins/group/calico/Start (73.32s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-411151 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E0812 00:09:30.925937    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/default-k8s-diff-port-916572/client.crt: no such file or directory
E0812 00:09:30.931164    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/default-k8s-diff-port-916572/client.crt: no such file or directory
E0812 00:09:30.941395    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/default-k8s-diff-port-916572/client.crt: no such file or directory
E0812 00:09:30.961614    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/default-k8s-diff-port-916572/client.crt: no such file or directory
E0812 00:09:31.001887    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/default-k8s-diff-port-916572/client.crt: no such file or directory
E0812 00:09:31.082140    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/default-k8s-diff-port-916572/client.crt: no such file or directory
E0812 00:09:31.242525    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/default-k8s-diff-port-916572/client.crt: no such file or directory
E0812 00:09:31.563013    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/default-k8s-diff-port-916572/client.crt: no such file or directory
E0812 00:09:32.203877    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/default-k8s-diff-port-916572/client.crt: no such file or directory
E0812 00:09:33.484318    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/default-k8s-diff-port-916572/client.crt: no such file or directory
E0812 00:09:36.045198    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/default-k8s-diff-port-916572/client.crt: no such file or directory
E0812 00:09:37.664648    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/client.crt: no such file or directory
E0812 00:09:41.166000    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/default-k8s-diff-port-916572/client.crt: no such file or directory
E0812 00:09:51.406345    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/default-k8s-diff-port-916572/client.crt: no such file or directory
E0812 00:09:54.617551    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/ingress-addon-legacy-200414/client.crt: no such file or directory
E0812 00:10:11.886964    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/default-k8s-diff-port-916572/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-411151 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m13.315532703s)
--- PASS: TestNetworkPlugins/group/calico/Start (73.32s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.05s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-n5f84" [a93eaa7b-9794-4005-a14b-518ccea425df] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.044002077s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.05s)
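
Note: ControllerPod only asserts that the CNI's node agent is Running. An equivalent spot check against the same label selector, sketched with standard kubectl rather than the test's code:

	# kindnet runs as a pod labelled app=kindnet in kube-system
	kubectl --context kindnet-411151 get pods -n kube-system -l app=kindnet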

TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-411151 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.41s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-411151 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-dq5hj" [e3e2bcbd-677a-4265-8310-087d459503f9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-dq5hj" [e3e2bcbd-677a-4265-8310-087d459503f9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.012391662s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.41s)

TestNetworkPlugins/group/calico/ControllerPod (5.04s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-w5nvh" [f2b3bdb8-e795-40fc-b7f1-aceafec0737f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.042902005s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.04s)

TestNetworkPlugins/group/kindnet/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-411151 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.25s)

TestNetworkPlugins/group/kindnet/Localhost (0.27s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-411151 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.27s)

TestNetworkPlugins/group/kindnet/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-411151 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.22s)

TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-411151 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

TestNetworkPlugins/group/calico/NetCatPod (11.5s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-411151 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-tmrhf" [481b875c-7924-4eab-bf63-3fa4bf815b36] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0812 00:10:52.851279    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/default-k8s-diff-port-916572/client.crt: no such file or directory
helpers_test.go:344: "netcat-7458db8b8-tmrhf" [481b875c-7924-4eab-bf63-3fa4bf815b36] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.019687853s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.50s)

TestNetworkPlugins/group/calico/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-411151 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.28s)

TestNetworkPlugins/group/calico/Localhost (0.29s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-411151 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.29s)

TestNetworkPlugins/group/calico/HairPin (0.26s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-411151 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.26s)

TestNetworkPlugins/group/custom-flannel/Start (73.95s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-411151 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-411151 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m13.946742944s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (73.95s)
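
Worth noting: minikube's --cni flag takes either a built-in plugin name (auto, bridge, calico, cilium, flannel, kindnet) or, as in this run, a path to a CNI manifest that is applied as-is. Stripped to its essentials, the invocation above is:

    minikube start -p custom-flannel-411151 --driver=docker --container-runtime=crio --cni=testdata/kube-flannel.yaml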

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (90.58s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-411151 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E0812 00:11:35.866039    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/no-preload-683425/client.crt: no such file or directory
E0812 00:11:59.474517    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/functional-327081/client.crt: no such file or directory
E0812 00:12:14.771797    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/default-k8s-diff-port-916572/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-411151 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m30.575856017s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (90.58s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-411151 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.39s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-411151 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-cd7wt" [550e7c84-9770-4b96-9dbf-7caa0f4269c2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-cd7wt" [550e7c84-9770-4b96-9dbf-7caa0f4269c2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.010758206s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.39s)

TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-411151 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-411151 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-411151 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

TestNetworkPlugins/group/flannel/Start (71.79s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-411151 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-411151 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m11.794442424s)
--- PASS: TestNetworkPlugins/group/flannel/Start (71.79s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.44s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-411151 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.44s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.59s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-411151 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-bd6vf" [7ee53e67-bd92-4e9e-9925-280149cbbabb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-bd6vf" [7ee53e67-bd92-4e9e-9925-280149cbbabb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.010395651s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.59s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-411151 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.26s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-411151 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.24s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-411151 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

TestNetworkPlugins/group/bridge/Start (88.29s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-411151 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0812 00:13:40.284052    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/old-k8s-version-798936/client.crt: no such file or directory
E0812 00:13:50.265962    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/auto-411151/client.crt: no such file or directory
E0812 00:13:50.273203    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/auto-411151/client.crt: no such file or directory
E0812 00:13:50.283713    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/auto-411151/client.crt: no such file or directory
E0812 00:13:50.304021    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/auto-411151/client.crt: no such file or directory
E0812 00:13:50.344179    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/auto-411151/client.crt: no such file or directory
E0812 00:13:50.425114    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/auto-411151/client.crt: no such file or directory
E0812 00:13:50.585243    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/auto-411151/client.crt: no such file or directory
E0812 00:13:50.905934    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/auto-411151/client.crt: no such file or directory
E0812 00:13:51.546292    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/auto-411151/client.crt: no such file or directory
E0812 00:13:52.826473    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/auto-411151/client.crt: no such file or directory
E0812 00:13:55.387098    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/auto-411151/client.crt: no such file or directory
E0812 00:14:00.507633    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/auto-411151/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-411151 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m28.294360613s)
--- PASS: TestNetworkPlugins/group/bridge/Start (88.29s)

TestNetworkPlugins/group/flannel/ControllerPod (5.04s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-5pzsh" [2ae438b4-f627-4fdd-9947-1c7020c4711e] Running
E0812 00:14:10.748710    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/auto-411151/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.036272524s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.04s)
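
The ControllerPod gate waits for the flannel DaemonSet pod (label app=flannel in the kube-flannel namespace) to reach Running; the state the harness polls for can be listed by hand with the same selector:

    kubectl --context flannel-411151 -n kube-flannel get pods -l app=flannel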

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-411151 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.35s)

TestNetworkPlugins/group/flannel/NetCatPod (10.42s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-411151 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-2nd8w" [b0806f0a-20ca-4f2f-a69c-63774aa1a48f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-2nd8w" [b0806f0a-20ca-4f2f-a69c-63774aa1a48f] Running
E0812 00:14:24.803599    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/addons-557401/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.013377626s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.42s)

TestNetworkPlugins/group/flannel/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-411151 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.23s)

TestNetworkPlugins/group/flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-411151 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.18s)

TestNetworkPlugins/group/flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-411151 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-411151 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

TestNetworkPlugins/group/bridge/NetCatPod (10.35s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-411151 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-n9wtz" [85d779c6-2042-4e52-be26-ba2fc67dc1ce] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-n9wtz" [85d779c6-2042-4e52-be26-ba2fc67dc1ce] Running
E0812 00:15:12.190466    7634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17044-2333/.minikube/profiles/auto-411151/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.010818004s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.35s)

TestNetworkPlugins/group/bridge/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-411151 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)
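
The DNS subtests resolve the short name kubernetes.default from inside the netcat pod, which works because the pod's /etc/resolv.conf search list expands it to kubernetes.default.svc.cluster.local. Both halves can be checked manually:

    kubectl --context bridge-411151 exec deployment/netcat -- cat /etc/resolv.conf
    kubectl --context bridge-411151 exec deployment/netcat -- nslookup kubernetes.default.svc.cluster.local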

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-411151 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

TestNetworkPlugins/group/bridge/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-411151 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.18s)

Test skip (32/304)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.27.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.27.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.4/cached-images (0.00s)

TestDownloadOnly/v1.27.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.27.4/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.4/binaries (0.00s)

TestDownloadOnly/v1.27.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.27.4/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.27.4/kubectl (0.00s)

TestDownloadOnly/v1.28.0-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0-rc.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0-rc.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0-rc.0/binaries (0.00s)

TestDownloadOnly/v1.28.0-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0-rc.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0-rc.0/kubectl (0.00s)

TestDownloadOnlyKic (0.56s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-499597 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:234: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-499597" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-499597
--- SKIP: TestDownloadOnlyKic (0.56s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:420: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-787182" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-787182
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

TestNetworkPlugins/group/kubenet (4.41s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:522: 
----------------------- debugLogs start: kubenet-411151 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-411151

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-411151

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-411151

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-411151

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-411151

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-411151

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-411151

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-411151

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-411151

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-411151

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411151"

>>> host: /etc/hosts:
* Profile "kubenet-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411151"

>>> host: /etc/resolv.conf:
* Profile "kubenet-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411151"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-411151

>>> host: crictl pods:
* Profile "kubenet-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411151"

>>> host: crictl containers:
* Profile "kubenet-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411151"

>>> k8s: describe netcat deployment:
error: context "kubenet-411151" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-411151" does not exist

>>> k8s: netcat logs:
error: context "kubenet-411151" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-411151" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-411151" does not exist

>>> k8s: coredns logs:
error: context "kubenet-411151" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-411151" does not exist

>>> k8s: api server logs:
error: context "kubenet-411151" does not exist

>>> host: /etc/cni:
* Profile "kubenet-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411151"

>>> host: ip a s:
* Profile "kubenet-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411151"

>>> host: ip r s:
* Profile "kubenet-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411151"

>>> host: iptables-save:
* Profile "kubenet-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411151"

>>> host: iptables table nat:
* Profile "kubenet-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411151"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-411151" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-411151" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-411151" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411151"

>>> host: kubelet daemon config:
* Profile "kubenet-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411151"

>>> k8s: kubelet logs:
* Profile "kubenet-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411151"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411151"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411151"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-411151

>>> host: docker daemon status:
* Profile "kubenet-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411151"

>>> host: docker daemon config:
* Profile "kubenet-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411151"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411151"

>>> host: docker system info:
* Profile "kubenet-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411151"

>>> host: cri-docker daemon status:
* Profile "kubenet-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411151"

>>> host: cri-docker daemon config:
* Profile "kubenet-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411151"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411151"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411151"

>>> host: cri-dockerd version:
* Profile "kubenet-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411151"

>>> host: containerd daemon status:
* Profile "kubenet-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411151"

>>> host: containerd daemon config:
* Profile "kubenet-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411151"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411151"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411151"

>>> host: containerd config dump:
* Profile "kubenet-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411151"

>>> host: crio daemon status:
* Profile "kubenet-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411151"

>>> host: crio daemon config:
* Profile "kubenet-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411151"

>>> host: /etc/crio:
* Profile "kubenet-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411151"

>>> host: crio config:
* Profile "kubenet-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-411151"

----------------------- debugLogs end: kubenet-411151 [took: 4.147080282s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-411151" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-411151
--- SKIP: TestNetworkPlugins/group/kubenet (4.41s)

TestNetworkPlugins/group/cilium (5.29s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-411151 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-411151

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-411151

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-411151

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-411151

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-411151

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-411151

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-411151

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-411151

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-411151

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-411151

>>> host: /etc/nsswitch.conf:
* Profile "cilium-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411151"

>>> host: /etc/hosts:
* Profile "cilium-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411151"

>>> host: /etc/resolv.conf:
* Profile "cilium-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411151"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-411151

>>> host: crictl pods:
* Profile "cilium-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411151"

>>> host: crictl containers:
* Profile "cilium-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411151"

>>> k8s: describe netcat deployment:
error: context "cilium-411151" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-411151" does not exist

>>> k8s: netcat logs:
error: context "cilium-411151" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-411151" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-411151" does not exist

>>> k8s: coredns logs:
error: context "cilium-411151" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-411151" does not exist

>>> k8s: api server logs:
error: context "cilium-411151" does not exist

>>> host: /etc/cni:
* Profile "cilium-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411151"

>>> host: ip a s:
* Profile "cilium-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411151"

>>> host: ip r s:
* Profile "cilium-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411151"

>>> host: iptables-save:
* Profile "cilium-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411151"

>>> host: iptables table nat:
* Profile "cilium-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411151"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-411151

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-411151

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-411151" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-411151" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-411151

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-411151

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-411151" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-411151" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-411151" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-411151" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-411151" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411151"

>>> host: kubelet daemon config:
* Profile "cilium-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411151"

>>> k8s: kubelet logs:
* Profile "cilium-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411151"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411151"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411151"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-411151

>>> host: docker daemon status:
* Profile "cilium-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411151"

>>> host: docker daemon config:
* Profile "cilium-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411151"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411151"

>>> host: docker system info:
* Profile "cilium-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411151"

>>> host: cri-docker daemon status:
* Profile "cilium-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411151"

>>> host: cri-docker daemon config:
* Profile "cilium-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411151"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411151"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411151"

>>> host: cri-dockerd version:
* Profile "cilium-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411151"

>>> host: containerd daemon status:
* Profile "cilium-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411151"

>>> host: containerd daemon config:
* Profile "cilium-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411151"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411151"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411151"

>>> host: containerd config dump:
* Profile "cilium-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411151"

>>> host: crio daemon status:
* Profile "cilium-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411151"

>>> host: crio daemon config:
* Profile "cilium-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411151"

>>> host: /etc/crio:
* Profile "cilium-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411151"

>>> host: crio config:
* Profile "cilium-411151" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411151"

----------------------- debugLogs end: cilium-411151 [took: 4.876060792s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-411151" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-411151
--- SKIP: TestNetworkPlugins/group/cilium (5.29s)

                                                
                                    