Test Report: Docker_Linux_crio 17011

4d909ae33ff265fc050ea07aeaa703b9386ea7a9:2023-08-09:30510

Failed tests (6/304)

Order  Failed test                                           Duration (s)
32     TestAddons/parallel/Ingress                           152.17
161    TestIngressAddonLegacy/serial/ValidateIngressAddons   183.82
211    TestMultiNode/serial/PingHostFrom2Pods                3.19
232    TestRunningBinaryUpgrade                              73.24
258    TestStoppedBinaryUpgrade/Upgrade                      64.11
270    TestPause/serial/SecondStartNoReconfiguration         72.77
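
To re-run one of these failures outside CI, the matching integration test can be invoked from a minikube checkout with the same driver/runtime combination. The command below is a sketch, not the exact CI invocation: it assumes a built out/minikube-linux-amd64 in the working tree and the test suite's --minikube-start-args flag; timeouts may need adjusting.

	go test ./test/integration -v -timeout 30m \
		-run "TestAddons/parallel/Ingress" \
		--minikube-start-args="--driver=docker --container-runtime=crio"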
TestAddons/parallel/Ingress (152.17s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-922218 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-922218 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-922218 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [40720706-58e8-49b3-9f8d-0fa92e29ac7e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [40720706-58e8-49b3-9f8d-0fa92e29ac7e] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.009702197s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p addons-922218 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-922218 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.825362888s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
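
The "Process exited with status 28" in the stderr block above is the remote curl's exit status surfaced through ssh; curl exit code 28 is CURLE_OPERATION_TIMEDOUT, so the request to the ingress controller hung rather than being refused outright. The same probe can be repeated by hand against this profile (a sketch; --max-time is added here so the check fails fast instead of hanging):

	out/minikube-linux-amd64 -p addons-922218 ssh \
		"curl -sv --max-time 30 -H 'Host: nginx.example.com' http://127.0.0.1/"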
addons_test.go:262: (dbg) Run:  kubectl --context addons-922218 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p addons-922218 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p addons-922218 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p addons-922218 addons disable ingress-dns --alsologtostderr -v=1: (1.384204654s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p addons-922218 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p addons-922218 addons disable ingress --alsologtostderr -v=1: (7.623370423s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-922218
helpers_test.go:235: (dbg) docker inspect addons-922218:

-- stdout --
	[
	    {
	        "Id": "65b18d842d49c7d85b07ae7190d22f42855d771f2aa43d8a4a65c1c4b853bbbb",
	        "Created": "2023-08-09T18:39:46.973530202Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 825118,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-08-09T18:39:47.24278215Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:51eee4927f7e218e70017d38db072c77f0b6036bbfe389eac8043694e7529d58",
	        "ResolvConfPath": "/var/lib/docker/containers/65b18d842d49c7d85b07ae7190d22f42855d771f2aa43d8a4a65c1c4b853bbbb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/65b18d842d49c7d85b07ae7190d22f42855d771f2aa43d8a4a65c1c4b853bbbb/hostname",
	        "HostsPath": "/var/lib/docker/containers/65b18d842d49c7d85b07ae7190d22f42855d771f2aa43d8a4a65c1c4b853bbbb/hosts",
	        "LogPath": "/var/lib/docker/containers/65b18d842d49c7d85b07ae7190d22f42855d771f2aa43d8a4a65c1c4b853bbbb/65b18d842d49c7d85b07ae7190d22f42855d771f2aa43d8a4a65c1c4b853bbbb-json.log",
	        "Name": "/addons-922218",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-922218:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-922218",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/8a73a97ac18bf4e535af51aa399358365f73225fc2dc93114602ccd892089a5f-init/diff:/var/lib/docker/overlay2/dffcbda35d4e6780372e77e03c9f976a612c164e3ac348da817dd7b6996e96fb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8a73a97ac18bf4e535af51aa399358365f73225fc2dc93114602ccd892089a5f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8a73a97ac18bf4e535af51aa399358365f73225fc2dc93114602ccd892089a5f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8a73a97ac18bf4e535af51aa399358365f73225fc2dc93114602ccd892089a5f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-922218",
	                "Source": "/var/lib/docker/volumes/addons-922218/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-922218",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-922218",
	                "name.minikube.sigs.k8s.io": "addons-922218",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "09026e9aeec595b41fd45d095097be70fb1e236988449fc2d075a258dd81eb0a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33407"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33406"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33403"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33405"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33404"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/09026e9aeec5",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-922218": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "65b18d842d49",
	                        "addons-922218"
	                    ],
	                    "NetworkID": "707d10ab89bf20cf43ac1985c1649c957eb9f3f183b45e074a4b1b9886b287db",
	                    "EndpointID": "f3f71ff2d91cb6d8d3626182dadcd940da0bee00c286277ebd2b1b7779406147",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
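
When only one field of the inspect dump above matters, docker inspect's standard Go-template flag (-f/--format) narrows the output. For example, to pull just the published ports or the container's IP on the addons-922218 network:

	docker inspect -f '{{json .NetworkSettings.Ports}}' addons-922218
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' addons-922218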
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-922218 -n addons-922218
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-922218 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-922218 logs -n 25: (1.147518769s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-649799   | jenkins | v1.31.1 | 09 Aug 23 18:39 UTC |                     |
	|         | -p download-only-649799           |                        |         |         |                     |                     |
	|         | --force --alsologtostderr         |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                        |         |         |                     |                     |
	|         | --container-runtime=crio          |                        |         |         |                     |                     |
	|         | --driver=docker                   |                        |         |         |                     |                     |
	|         | --container-runtime=crio          |                        |         |         |                     |                     |
	| start   | -o=json --download-only           | download-only-649799   | jenkins | v1.31.1 | 09 Aug 23 18:39 UTC |                     |
	|         | -p download-only-649799           |                        |         |         |                     |                     |
	|         | --force --alsologtostderr         |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.4      |                        |         |         |                     |                     |
	|         | --container-runtime=crio          |                        |         |         |                     |                     |
	|         | --driver=docker                   |                        |         |         |                     |                     |
	|         | --container-runtime=crio          |                        |         |         |                     |                     |
	| start   | -o=json --download-only           | download-only-649799   | jenkins | v1.31.1 | 09 Aug 23 18:39 UTC |                     |
	|         | -p download-only-649799           |                        |         |         |                     |                     |
	|         | --force --alsologtostderr         |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.0-rc.0 |                        |         |         |                     |                     |
	|         | --container-runtime=crio          |                        |         |         |                     |                     |
	|         | --driver=docker                   |                        |         |         |                     |                     |
	|         | --container-runtime=crio          |                        |         |         |                     |                     |
	| delete  | --all                             | minikube               | jenkins | v1.31.1 | 09 Aug 23 18:39 UTC | 09 Aug 23 18:39 UTC |
	| delete  | -p download-only-649799           | download-only-649799   | jenkins | v1.31.1 | 09 Aug 23 18:39 UTC | 09 Aug 23 18:39 UTC |
	| delete  | -p download-only-649799           | download-only-649799   | jenkins | v1.31.1 | 09 Aug 23 18:39 UTC | 09 Aug 23 18:39 UTC |
	| start   | --download-only -p                | download-docker-720217 | jenkins | v1.31.1 | 09 Aug 23 18:39 UTC |                     |
	|         | download-docker-720217            |                        |         |         |                     |                     |
	|         | --alsologtostderr                 |                        |         |         |                     |                     |
	|         | --driver=docker                   |                        |         |         |                     |                     |
	|         | --container-runtime=crio          |                        |         |         |                     |                     |
	| delete  | -p download-docker-720217         | download-docker-720217 | jenkins | v1.31.1 | 09 Aug 23 18:39 UTC | 09 Aug 23 18:39 UTC |
	| start   | --download-only -p                | binary-mirror-109939   | jenkins | v1.31.1 | 09 Aug 23 18:39 UTC |                     |
	|         | binary-mirror-109939              |                        |         |         |                     |                     |
	|         | --alsologtostderr                 |                        |         |         |                     |                     |
	|         | --binary-mirror                   |                        |         |         |                     |                     |
	|         | http://127.0.0.1:40831            |                        |         |         |                     |                     |
	|         | --driver=docker                   |                        |         |         |                     |                     |
	|         | --container-runtime=crio          |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-109939           | binary-mirror-109939   | jenkins | v1.31.1 | 09 Aug 23 18:39 UTC | 09 Aug 23 18:39 UTC |
	| start   | -p addons-922218                  | addons-922218          | jenkins | v1.31.1 | 09 Aug 23 18:39 UTC | 09 Aug 23 18:41 UTC |
	|         | --wait=true --memory=4000         |                        |         |         |                     |                     |
	|         | --alsologtostderr                 |                        |         |         |                     |                     |
	|         | --addons=registry                 |                        |         |         |                     |                     |
	|         | --addons=metrics-server           |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots          |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver      |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                 |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner            |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget         |                        |         |         |                     |                     |
	|         | --driver=docker                   |                        |         |         |                     |                     |
	|         | --container-runtime=crio          |                        |         |         |                     |                     |
	|         | --addons=ingress                  |                        |         |         |                     |                     |
	|         | --addons=ingress-dns              |                        |         |         |                     |                     |
	|         | --addons=helm-tiller              |                        |         |         |                     |                     |
	| addons  | enable headlamp                   | addons-922218          | jenkins | v1.31.1 | 09 Aug 23 18:41 UTC | 09 Aug 23 18:41 UTC |
	|         | -p addons-922218                  |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1            |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p          | addons-922218          | jenkins | v1.31.1 | 09 Aug 23 18:41 UTC | 09 Aug 23 18:41 UTC |
	|         | addons-922218                     |                        |         |         |                     |                     |
	| addons  | addons-922218 addons              | addons-922218          | jenkins | v1.31.1 | 09 Aug 23 18:41 UTC | 09 Aug 23 18:41 UTC |
	|         | disable metrics-server            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1            |                        |         |         |                     |                     |
	| ip      | addons-922218 ip                  | addons-922218          | jenkins | v1.31.1 | 09 Aug 23 18:41 UTC | 09 Aug 23 18:41 UTC |
	| addons  | addons-922218 addons disable      | addons-922218          | jenkins | v1.31.1 | 09 Aug 23 18:41 UTC | 09 Aug 23 18:41 UTC |
	|         | registry --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                              |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p       | addons-922218          | jenkins | v1.31.1 | 09 Aug 23 18:41 UTC | 09 Aug 23 18:41 UTC |
	|         | addons-922218                     |                        |         |         |                     |                     |
	| addons  | addons-922218 addons disable      | addons-922218          | jenkins | v1.31.1 | 09 Aug 23 18:41 UTC | 09 Aug 23 18:41 UTC |
	|         | helm-tiller --alsologtostderr     |                        |         |         |                     |                     |
	|         | -v=1                              |                        |         |         |                     |                     |
	| ssh     | addons-922218 ssh curl -s         | addons-922218          | jenkins | v1.31.1 | 09 Aug 23 18:41 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:       |                        |         |         |                     |                     |
	|         | nginx.example.com'                |                        |         |         |                     |                     |
	| addons  | addons-922218 addons              | addons-922218          | jenkins | v1.31.1 | 09 Aug 23 18:43 UTC | 09 Aug 23 18:43 UTC |
	|         | disable csi-hostpath-driver       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1            |                        |         |         |                     |                     |
	| addons  | addons-922218 addons              | addons-922218          | jenkins | v1.31.1 | 09 Aug 23 18:43 UTC | 09 Aug 23 18:43 UTC |
	|         | disable volumesnapshots           |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1            |                        |         |         |                     |                     |
	| ip      | addons-922218 ip                  | addons-922218          | jenkins | v1.31.1 | 09 Aug 23 18:44 UTC | 09 Aug 23 18:44 UTC |
	| addons  | addons-922218 addons disable      | addons-922218          | jenkins | v1.31.1 | 09 Aug 23 18:44 UTC | 09 Aug 23 18:44 UTC |
	|         | ingress-dns --alsologtostderr     |                        |         |         |                     |                     |
	|         | -v=1                              |                        |         |         |                     |                     |
	| addons  | addons-922218 addons disable      | addons-922218          | jenkins | v1.31.1 | 09 Aug 23 18:44 UTC | 09 Aug 23 18:44 UTC |
	|         | ingress --alsologtostderr -v=1    |                        |         |         |                     |                     |
	|---------|-----------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/09 18:39:24
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.20.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0809 18:39:24.843002  824462 out.go:296] Setting OutFile to fd 1 ...
	I0809 18:39:24.843115  824462 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 18:39:24.843123  824462 out.go:309] Setting ErrFile to fd 2...
	I0809 18:39:24.843127  824462 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 18:39:24.843324  824462 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17011-816603/.minikube/bin
	I0809 18:39:24.843967  824462 out.go:303] Setting JSON to false
	I0809 18:39:24.845434  824462 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":8520,"bootTime":1691597845,"procs":783,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1038-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0809 18:39:24.845496  824462 start.go:138] virtualization: kvm guest
	I0809 18:39:24.847768  824462 out.go:177] * [addons-922218] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)
	I0809 18:39:24.849604  824462 out.go:177]   - MINIKUBE_LOCATION=17011
	I0809 18:39:24.851021  824462 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 18:39:24.849669  824462 notify.go:220] Checking for updates...
	I0809 18:39:24.852882  824462 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17011-816603/kubeconfig
	I0809 18:39:24.854560  824462 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17011-816603/.minikube
	I0809 18:39:24.856840  824462 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0809 18:39:24.858533  824462 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0809 18:39:24.860199  824462 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 18:39:24.881552  824462 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0809 18:39:24.881691  824462 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0809 18:39:24.932374  824462 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-08-09 18:39:24.924038053 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1038-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0809 18:39:24.932475  824462 docker.go:294] overlay module found
	I0809 18:39:24.934407  824462 out.go:177] * Using the docker driver based on user configuration
	I0809 18:39:24.935736  824462 start.go:298] selected driver: docker
	I0809 18:39:24.935758  824462 start.go:901] validating driver "docker" against <nil>
	I0809 18:39:24.935770  824462 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0809 18:39:24.936522  824462 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0809 18:39:24.990359  824462 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-08-09 18:39:24.982035434 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1038-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0809 18:39:24.990510  824462 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0809 18:39:24.990729  824462 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0809 18:39:24.992477  824462 out.go:177] * Using Docker driver with root privileges
	I0809 18:39:24.993914  824462 cni.go:84] Creating CNI manager for ""
	I0809 18:39:24.993928  824462 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0809 18:39:24.993938  824462 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0809 18:39:24.993951  824462 start_flags.go:319] config:
	{Name:addons-922218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:addons-922218 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 18:39:24.995406  824462 out.go:177] * Starting control plane node addons-922218 in cluster addons-922218
	I0809 18:39:24.996638  824462 cache.go:122] Beginning downloading kic base image for docker with crio
	I0809 18:39:24.997789  824462 out.go:177] * Pulling base image ...
	I0809 18:39:24.998938  824462 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0809 18:39:24.998965  824462 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17011-816603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4
	I0809 18:39:24.998960  824462 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 in local docker daemon
	I0809 18:39:24.998975  824462 cache.go:57] Caching tarball of preloaded images
	I0809 18:39:24.999064  824462 preload.go:174] Found /home/jenkins/minikube-integration/17011-816603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0809 18:39:24.999077  824462 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on crio
	I0809 18:39:24.999452  824462 profile.go:148] Saving config to /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/config.json ...
	I0809 18:39:24.999480  824462 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/config.json: {Name:mkfd9a8129548f4f532240142f04764187e6b82e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 18:39:25.014586  824462 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 to local cache
	I0809 18:39:25.014732  824462 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 in local cache directory
	I0809 18:39:25.014749  824462 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 in local cache directory, skipping pull
	I0809 18:39:25.014753  824462 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 exists in cache, skipping pull
	I0809 18:39:25.014762  824462 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 as a tarball
	I0809 18:39:25.014769  824462 cache.go:163] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 from local cache
	I0809 18:39:37.587590  824462 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 from cached tarball
	I0809 18:39:37.587629  824462 cache.go:195] Successfully downloaded all kic artifacts
	I0809 18:39:37.587688  824462 start.go:365] acquiring machines lock for addons-922218: {Name:mkf98af2b0668816f40a78ba0eebaa59293d6c08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 18:39:37.587823  824462 start.go:369] acquired machines lock for "addons-922218" in 108.571µs
	I0809 18:39:37.587859  824462 start.go:93] Provisioning new machine with config: &{Name:addons-922218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:addons-922218 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0809 18:39:37.587993  824462 start.go:125] createHost starting for "" (driver="docker")
	I0809 18:39:37.663858  824462 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0809 18:39:37.664136  824462 start.go:159] libmachine.API.Create for "addons-922218" (driver="docker")
	I0809 18:39:37.664170  824462 client.go:168] LocalClient.Create starting
	I0809 18:39:37.664334  824462 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca.pem
	I0809 18:39:37.820346  824462 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/cert.pem
	I0809 18:39:37.999435  824462 cli_runner.go:164] Run: docker network inspect addons-922218 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0809 18:39:38.015764  824462 cli_runner.go:211] docker network inspect addons-922218 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0809 18:39:38.015843  824462 network_create.go:281] running [docker network inspect addons-922218] to gather additional debugging logs...
	I0809 18:39:38.015864  824462 cli_runner.go:164] Run: docker network inspect addons-922218
	W0809 18:39:38.031322  824462 cli_runner.go:211] docker network inspect addons-922218 returned with exit code 1
	I0809 18:39:38.031362  824462 network_create.go:284] error running [docker network inspect addons-922218]: docker network inspect addons-922218: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-922218 not found
	I0809 18:39:38.031375  824462 network_create.go:286] output of [docker network inspect addons-922218]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-922218 not found
	
	** /stderr **
	I0809 18:39:38.031446  824462 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0809 18:39:38.049143  824462 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00136c8f0}
	I0809 18:39:38.049186  824462 network_create.go:123] attempt to create docker network addons-922218 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0809 18:39:38.049243  824462 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-922218 addons-922218
	I0809 18:39:38.297621  824462 network_create.go:107] docker network addons-922218 192.168.49.0/24 created
	I0809 18:39:38.297658  824462 kic.go:117] calculated static IP "192.168.49.2" for the "addons-922218" container
	I0809 18:39:38.297746  824462 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0809 18:39:38.312579  824462 cli_runner.go:164] Run: docker volume create addons-922218 --label name.minikube.sigs.k8s.io=addons-922218 --label created_by.minikube.sigs.k8s.io=true
	I0809 18:39:38.352504  824462 oci.go:103] Successfully created a docker volume addons-922218
	I0809 18:39:38.352601  824462 cli_runner.go:164] Run: docker run --rm --name addons-922218-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-922218 --entrypoint /usr/bin/test -v addons-922218:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 -d /var/lib
	I0809 18:39:42.039268  824462 cli_runner.go:217] Completed: docker run --rm --name addons-922218-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-922218 --entrypoint /usr/bin/test -v addons-922218:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 -d /var/lib: (3.686607337s)
	I0809 18:39:42.039306  824462 oci.go:107] Successfully prepared a docker volume addons-922218
	I0809 18:39:42.039337  824462 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0809 18:39:42.039362  824462 kic.go:190] Starting extracting preloaded images to volume ...
	I0809 18:39:42.039416  824462 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17011-816603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-922218:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 -I lz4 -xf /preloaded.tar -C /extractDir
	I0809 18:39:46.906173  824462 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17011-816603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-922218:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 -I lz4 -xf /preloaded.tar -C /extractDir: (4.866681816s)
	I0809 18:39:46.906208  824462 kic.go:199] duration metric: took 4.866843 seconds to extract preloaded images to volume
	W0809 18:39:46.906341  824462 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0809 18:39:46.906434  824462 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0809 18:39:46.959056  824462 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-922218 --name addons-922218 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-922218 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-922218 --network addons-922218 --ip 192.168.49.2 --volume addons-922218:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37
	I0809 18:39:47.250424  824462 cli_runner.go:164] Run: docker container inspect addons-922218 --format={{.State.Running}}
	I0809 18:39:47.267377  824462 cli_runner.go:164] Run: docker container inspect addons-922218 --format={{.State.Status}}
	I0809 18:39:47.285019  824462 cli_runner.go:164] Run: docker exec addons-922218 stat /var/lib/dpkg/alternatives/iptables
	I0809 18:39:47.353387  824462 oci.go:144] the created container "addons-922218" has a running status.
	I0809 18:39:47.353418  824462 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17011-816603/.minikube/machines/addons-922218/id_rsa...
	I0809 18:39:47.440593  824462 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17011-816603/.minikube/machines/addons-922218/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0809 18:39:47.460128  824462 cli_runner.go:164] Run: docker container inspect addons-922218 --format={{.State.Status}}
	I0809 18:39:47.476661  824462 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0809 18:39:47.476685  824462 kic_runner.go:114] Args: [docker exec --privileged addons-922218 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0809 18:39:47.545317  824462 cli_runner.go:164] Run: docker container inspect addons-922218 --format={{.State.Status}}
	I0809 18:39:47.561927  824462 machine.go:88] provisioning docker machine ...
	I0809 18:39:47.561971  824462 ubuntu.go:169] provisioning hostname "addons-922218"
	I0809 18:39:47.562041  824462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-922218
	I0809 18:39:47.582453  824462 main.go:141] libmachine: Using SSH client type: native
	I0809 18:39:47.582878  824462 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 127.0.0.1 33407 <nil> <nil>}
	I0809 18:39:47.582894  824462 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-922218 && echo "addons-922218" | sudo tee /etc/hostname
	I0809 18:39:47.583606  824462 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50428->127.0.0.1:33407: read: connection reset by peer
	I0809 18:39:50.734341  824462 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-922218
	
	I0809 18:39:50.734412  824462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-922218
	I0809 18:39:50.750832  824462 main.go:141] libmachine: Using SSH client type: native
	I0809 18:39:50.751224  824462 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 127.0.0.1 33407 <nil> <nil>}
	I0809 18:39:50.751241  824462 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-922218' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-922218/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-922218' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0809 18:39:50.883825  824462 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0809 18:39:50.883865  824462 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17011-816603/.minikube CaCertPath:/home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17011-816603/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17011-816603/.minikube}
	I0809 18:39:50.883894  824462 ubuntu.go:177] setting up certificates
	I0809 18:39:50.883909  824462 provision.go:83] configureAuth start
	I0809 18:39:50.883964  824462 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-922218
	I0809 18:39:50.899779  824462 provision.go:138] copyHostCerts
	I0809 18:39:50.899879  824462 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17011-816603/.minikube/ca.pem (1082 bytes)
	I0809 18:39:50.900010  824462 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17011-816603/.minikube/cert.pem (1123 bytes)
	I0809 18:39:50.900113  824462 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17011-816603/.minikube/key.pem (1679 bytes)
	I0809 18:39:50.900188  824462 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17011-816603/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca-key.pem org=jenkins.addons-922218 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-922218]
	I0809 18:39:51.051760  824462 provision.go:172] copyRemoteCerts
	I0809 18:39:51.051819  824462 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0809 18:39:51.051875  824462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-922218
	I0809 18:39:51.068797  824462 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33407 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/addons-922218/id_rsa Username:docker}
	I0809 18:39:51.163810  824462 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0809 18:39:51.185047  824462 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0809 18:39:51.205402  824462 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0809 18:39:51.225913  824462 provision.go:86] duration metric: configureAuth took 341.988348ms
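
configureAuth generates a server certificate whose SANs match the san=[...] list logged by provision.go above (192.168.49.2, 127.0.0.1, localhost, minikube, addons-922218). A minimal sketch of issuing such a SAN-bearing certificate with Go's crypto/x509; it is self-signed here for brevity, whereas minikube signs with the ca.pem/ca-key.pem pair:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.addons-922218"}}, // org= from the log
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs as listed in the provision.go san=[...] line:
            DNSNames:    []string{"localhost", "minikube", "addons-922218"},
            IPAddresses: []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
        }
        // Self-signed: the template acts as its own parent.
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
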
	I0809 18:39:51.225937  824462 ubuntu.go:193] setting minikube options for container-runtime
	I0809 18:39:51.226121  824462 config.go:182] Loaded profile config "addons-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0809 18:39:51.226219  824462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-922218
	I0809 18:39:51.242833  824462 main.go:141] libmachine: Using SSH client type: native
	I0809 18:39:51.243268  824462 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 127.0.0.1 33407 <nil> <nil>}
	I0809 18:39:51.243287  824462 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0809 18:39:51.462369  824462 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0809 18:39:51.462407  824462 machine.go:91] provisioned docker machine in 3.900444528s
	I0809 18:39:51.462420  824462 client.go:171] LocalClient.Create took 13.798236677s
	I0809 18:39:51.462445  824462 start.go:167] duration metric: libmachine.API.Create for "addons-922218" took 13.798311146s
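
The `%!s(MISSING)` in the printf command a few lines above (and the later `%!p(MISSING)` and `"0%!"(MISSING)` strings) is almost certainly not part of the remote shell command: the command contains a literal %s for the shell's printf, and when that string is later passed through a Go printf-style logging call with no matching operand, the fmt package renders the unmatched verb as %!s(MISSING). A two-line demonstration:

    package main

    import "fmt"

    func main() {
        // The remote command legitimately contains a shell printf verb:
        cmd := `sudo mkdir -p /etc/sysconfig && printf %s "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube`
        // Feeding it back through a printf-style call with no operands makes
        // Go's fmt package render the unmatched verb as %!s(MISSING),
        // reproducing the log line exactly.
        fmt.Println(fmt.Sprintf(cmd)) // intentional non-constant format string; go vet would flag it
    }
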
	I0809 18:39:51.462459  824462 start.go:300] post-start starting for "addons-922218" (driver="docker")
	I0809 18:39:51.462476  824462 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0809 18:39:51.462550  824462 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0809 18:39:51.462606  824462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-922218
	I0809 18:39:51.478599  824462 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33407 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/addons-922218/id_rsa Username:docker}
	I0809 18:39:51.576643  824462 ssh_runner.go:195] Run: cat /etc/os-release
	I0809 18:39:51.579832  824462 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0809 18:39:51.579869  824462 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0809 18:39:51.579877  824462 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0809 18:39:51.579885  824462 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0809 18:39:51.579895  824462 filesync.go:126] Scanning /home/jenkins/minikube-integration/17011-816603/.minikube/addons for local assets ...
	I0809 18:39:51.579950  824462 filesync.go:126] Scanning /home/jenkins/minikube-integration/17011-816603/.minikube/files for local assets ...
	I0809 18:39:51.579973  824462 start.go:303] post-start completed in 117.503164ms
	I0809 18:39:51.580273  824462 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-922218
	I0809 18:39:51.596833  824462 profile.go:148] Saving config to /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/config.json ...
	I0809 18:39:51.597145  824462 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0809 18:39:51.597210  824462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-922218
	I0809 18:39:51.613397  824462 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33407 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/addons-922218/id_rsa Username:docker}
	I0809 18:39:51.704491  824462 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0809 18:39:51.708549  824462 start.go:128] duration metric: createHost completed in 14.120540014s
	I0809 18:39:51.708579  824462 start.go:83] releasing machines lock for "addons-922218", held for 14.120739513s
	I0809 18:39:51.708652  824462 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-922218
	I0809 18:39:51.725129  824462 ssh_runner.go:195] Run: cat /version.json
	I0809 18:39:51.725180  824462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-922218
	I0809 18:39:51.725206  824462 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0809 18:39:51.725281  824462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-922218
	I0809 18:39:51.743248  824462 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33407 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/addons-922218/id_rsa Username:docker}
	I0809 18:39:51.743425  824462 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33407 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/addons-922218/id_rsa Username:docker}
	I0809 18:39:51.835660  824462 ssh_runner.go:195] Run: systemctl --version
	I0809 18:39:51.921210  824462 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0809 18:39:52.058874  824462 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0809 18:39:52.063207  824462 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0809 18:39:52.081280  824462 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0809 18:39:52.081361  824462 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0809 18:39:52.107573  824462 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0809 18:39:52.107606  824462 start.go:466] detecting cgroup driver to use...
	I0809 18:39:52.107711  824462 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0809 18:39:52.107776  824462 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0809 18:39:52.122195  824462 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0809 18:39:52.132192  824462 docker.go:196] disabling cri-docker service (if available) ...
	I0809 18:39:52.132249  824462 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0809 18:39:52.144354  824462 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0809 18:39:52.157019  824462 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0809 18:39:52.232940  824462 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0809 18:39:52.309849  824462 docker.go:212] disabling docker service ...
	I0809 18:39:52.309918  824462 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0809 18:39:52.328433  824462 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0809 18:39:52.338959  824462 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0809 18:39:52.421336  824462 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0809 18:39:52.504648  824462 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0809 18:39:52.515365  824462 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0809 18:39:52.529935  824462 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0809 18:39:52.530013  824462 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0809 18:39:52.538965  824462 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0809 18:39:52.539039  824462 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0809 18:39:52.547899  824462 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0809 18:39:52.556656  824462 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0809 18:39:52.565668  824462 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0809 18:39:52.573749  824462 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0809 18:39:52.581297  824462 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0809 18:39:52.588722  824462 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0809 18:39:52.660152  824462 ssh_runner.go:195] Run: sudo systemctl restart crio
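
The sed one-liners above pin the pause image and switch CRI-O's cgroup manager in /etc/crio/crio.conf.d/02-crio.conf before the daemon-reload and restart. A rough stand-in for those edits in Go, using line-anchored regular expressions the way the sed expressions do (must run as root):

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        const conf = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(conf)
        if err != nil {
            log.Fatal(err)
        }
        // (?m) makes ^ and $ match per line, like sed's default addressing.
        pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
        data = pause.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
        data = cgroup.ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
        if err := os.WriteFile(conf, data, 0o644); err != nil {
            log.Fatal(err)
        }
    }
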
	I0809 18:39:52.750059  824462 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0809 18:39:52.750144  824462 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0809 18:39:52.753751  824462 start.go:534] Will wait 60s for crictl version
	I0809 18:39:52.753818  824462 ssh_runner.go:195] Run: which crictl
	I0809 18:39:52.756817  824462 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0809 18:39:52.792081  824462 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0809 18:39:52.792166  824462 ssh_runner.go:195] Run: crio --version
	I0809 18:39:52.826477  824462 ssh_runner.go:195] Run: crio --version
	I0809 18:39:52.862510  824462 out.go:177] * Preparing Kubernetes v1.27.4 on CRI-O 1.24.6 ...
	I0809 18:39:52.863905  824462 cli_runner.go:164] Run: docker network inspect addons-922218 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0809 18:39:52.880696  824462 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0809 18:39:52.884118  824462 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
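
The bash one-liner above is an idempotent hosts-file update: strip any existing host.minikube.internal mapping, append a fresh one pointing at the 192.168.49.1 gateway, and copy the result back into place. The same pattern as a small Go sketch (again root-only; path and address are from the log):

    package main

    import (
        "log"
        "os"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            log.Fatal(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Drop any stale mapping for the name being pinned.
            if strings.HasSuffix(line, "\thost.minikube.internal") {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, "192.168.49.1\thost.minikube.internal")
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            log.Fatal(err)
        }
    }
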
	I0809 18:39:52.894155  824462 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0809 18:39:52.894212  824462 ssh_runner.go:195] Run: sudo crictl images --output json
	I0809 18:39:52.944348  824462 crio.go:496] all images are preloaded for cri-o runtime.
	I0809 18:39:52.944368  824462 crio.go:415] Images already preloaded, skipping extraction
	I0809 18:39:52.944411  824462 ssh_runner.go:195] Run: sudo crictl images --output json
	I0809 18:39:52.976351  824462 crio.go:496] all images are preloaded for cri-o runtime.
	I0809 18:39:52.976372  824462 cache_images.go:84] Images are preloaded, skipping loading
	I0809 18:39:52.976429  824462 ssh_runner.go:195] Run: crio config
	I0809 18:39:53.016576  824462 cni.go:84] Creating CNI manager for ""
	I0809 18:39:53.016602  824462 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0809 18:39:53.016617  824462 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0809 18:39:53.016644  824462 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-922218 NodeName:addons-922218 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0809 18:39:53.016812  824462 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-922218"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
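
The kubeadm config above is rendered from the kubeadm options struct logged at kubeadm.go:176 (NodeName, AdvertiseAddress, CRISocket, ports, CIDRs, and so on). A minimal, hypothetical illustration of that rendering for the InitConfiguration stanza; the struct here is a simplified stand-in, not minikube's real type:

    package main

    import (
        "fmt"
        "os"
    )

    type kubeadmOpts struct {
        NodeName, AdvertiseAddress, CRISocket string
        APIServerPort                         int
    }

    func main() {
        o := kubeadmOpts{ // values from the kubeadm options line in the log
            NodeName:         "addons-922218",
            AdvertiseAddress: "192.168.49.2",
            CRISocket:        "/var/run/crio/crio.sock",
            APIServerPort:    8443,
        }
        fmt.Fprintf(os.Stdout,
            "apiVersion: kubeadm.k8s.io/v1beta3\n"+
                "kind: InitConfiguration\n"+
                "localAPIEndpoint:\n"+
                "  advertiseAddress: %s\n"+
                "  bindPort: %d\n"+
                "nodeRegistration:\n"+
                "  criSocket: unix://%s\n"+
                "  name: %q\n",
            o.AdvertiseAddress, o.APIServerPort, o.CRISocket, o.NodeName)
    }
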
	
	I0809 18:39:53.016904  824462 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-922218 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:addons-922218 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0809 18:39:53.016968  824462 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0809 18:39:53.025272  824462 binaries.go:44] Found k8s binaries, skipping transfer
	I0809 18:39:53.025338  824462 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0809 18:39:53.033077  824462 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I0809 18:39:53.048606  824462 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0809 18:39:53.064181  824462 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0809 18:39:53.079584  824462 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0809 18:39:53.082718  824462 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0809 18:39:53.092313  824462 certs.go:56] Setting up /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218 for IP: 192.168.49.2
	I0809 18:39:53.092344  824462 certs.go:190] acquiring lock for shared ca certs: {Name:mk19b72d6df3cc07014c8108931f9946a7850469 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 18:39:53.092463  824462 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17011-816603/.minikube/ca.key
	I0809 18:39:53.297447  824462 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17011-816603/.minikube/ca.crt ...
	I0809 18:39:53.297480  824462 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17011-816603/.minikube/ca.crt: {Name:mk79f9e36b3f65b82a562ceb8b67555fa31c66a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 18:39:53.297660  824462 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17011-816603/.minikube/ca.key ...
	I0809 18:39:53.297670  824462 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17011-816603/.minikube/ca.key: {Name:mke90a4335837c3f264f87bbec64cb738260bfd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 18:39:53.297742  824462 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17011-816603/.minikube/proxy-client-ca.key
	I0809 18:39:53.386945  824462 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17011-816603/.minikube/proxy-client-ca.crt ...
	I0809 18:39:53.386978  824462 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17011-816603/.minikube/proxy-client-ca.crt: {Name:mk276ca04afe842a60b4374bfb3e8253f71aee71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 18:39:53.387148  824462 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17011-816603/.minikube/proxy-client-ca.key ...
	I0809 18:39:53.387159  824462 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17011-816603/.minikube/proxy-client-ca.key: {Name:mkdca8ab16b71408653cf21acb591ddc114e59f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 18:39:53.387258  824462 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/client.key
	I0809 18:39:53.387271  824462 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/client.crt with IP's: []
	I0809 18:39:53.449737  824462 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/client.crt ...
	I0809 18:39:53.449774  824462 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/client.crt: {Name:mk188bc765fafb4ca018d40e71020be9fdaf2177 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 18:39:53.449938  824462 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/client.key ...
	I0809 18:39:53.449949  824462 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/client.key: {Name:mk35dabfb8e0f099cd470ad180d4f166e3c38cd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 18:39:53.450015  824462 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/apiserver.key.dd3b5fb2
	I0809 18:39:53.450032  824462 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0809 18:39:53.905386  824462 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/apiserver.crt.dd3b5fb2 ...
	I0809 18:39:53.905426  824462 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/apiserver.crt.dd3b5fb2: {Name:mk0d636dde19afd8e04e13a97a0b2677a0232a0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 18:39:53.905637  824462 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/apiserver.key.dd3b5fb2 ...
	I0809 18:39:53.905655  824462 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/apiserver.key.dd3b5fb2: {Name:mk82b94b83638d7b29f7ab848119c29ed7495a5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 18:39:53.905759  824462 certs.go:337] copying /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/apiserver.crt
	I0809 18:39:53.905846  824462 certs.go:341] copying /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/apiserver.key
	I0809 18:39:53.905902  824462 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/proxy-client.key
	I0809 18:39:53.905924  824462 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/proxy-client.crt with IP's: []
	I0809 18:39:53.976780  824462 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/proxy-client.crt ...
	I0809 18:39:53.976815  824462 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/proxy-client.crt: {Name:mkb338e34157539e57a285de1bbe8c107c0e64b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 18:39:53.977006  824462 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/proxy-client.key ...
	I0809 18:39:53.977025  824462 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/proxy-client.key: {Name:mk8e6184407713a78344c3934be59b8b0a35c80b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
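
Unlike the earlier self-signed sketch, the client and proxy-client certificates generated here are signed by the minikubeCA and proxyClientCA keys written just above. The only structural difference is that x509.CreateCertificate receives the CA certificate and key as the parent instead of the template itself; a self-contained sketch with a throwaway CA and a hypothetical "minikube-user" subject:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA, standing in for the minikubeCA pair written above.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        ca := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
            IsCA:                  true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        caCert, _ := x509.ParseCertificate(caDER)

        // Client certificate, signed by the CA: parent is caCert, signer is caKey.
        clientKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube-user"}, // hypothetical subject
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(1, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &clientKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
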
	I0809 18:39:53.977251  824462 certs.go:437] found cert: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca-key.pem (1675 bytes)
	I0809 18:39:53.977300  824462 certs.go:437] found cert: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca.pem (1082 bytes)
	I0809 18:39:53.977338  824462 certs.go:437] found cert: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/home/jenkins/minikube-integration/17011-816603/.minikube/certs/cert.pem (1123 bytes)
	I0809 18:39:53.977368  824462 certs.go:437] found cert: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/home/jenkins/minikube-integration/17011-816603/.minikube/certs/key.pem (1679 bytes)
	I0809 18:39:53.977954  824462 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0809 18:39:54.000542  824462 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0809 18:39:54.023378  824462 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0809 18:39:54.044880  824462 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0809 18:39:54.066473  824462 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0809 18:39:54.088187  824462 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0809 18:39:54.109716  824462 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0809 18:39:54.131207  824462 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0809 18:39:54.153083  824462 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0809 18:39:54.175415  824462 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0809 18:39:54.190934  824462 ssh_runner.go:195] Run: openssl version
	I0809 18:39:54.195950  824462 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0809 18:39:54.204612  824462 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0809 18:39:54.207909  824462 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug  9 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0809 18:39:54.207972  824462 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0809 18:39:54.214424  824462 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0809 18:39:54.222851  824462 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0809 18:39:54.225987  824462 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0809 18:39:54.226042  824462 kubeadm.go:404] StartCluster: {Name:addons-922218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:addons-922218 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 18:39:54.226130  824462 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0809 18:39:54.226181  824462 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0809 18:39:54.259321  824462 cri.go:89] found id: ""
	I0809 18:39:54.259391  824462 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0809 18:39:54.267884  824462 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0809 18:39:54.275951  824462 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0809 18:39:54.276009  824462 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0809 18:39:54.283765  824462 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0809 18:39:54.283815  824462 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0809 18:39:54.362664  824462 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1038-gcp\n", err: exit status 1
	I0809 18:39:54.425753  824462 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0809 18:40:04.160379  824462 kubeadm.go:322] [init] Using Kubernetes version: v1.27.4
	I0809 18:40:04.160472  824462 kubeadm.go:322] [preflight] Running pre-flight checks
	I0809 18:40:04.160607  824462 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0809 18:40:04.160687  824462 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1038-gcp
	I0809 18:40:04.160738  824462 kubeadm.go:322] OS: Linux
	I0809 18:40:04.160798  824462 kubeadm.go:322] CGROUPS_CPU: enabled
	I0809 18:40:04.160880  824462 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0809 18:40:04.160927  824462 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0809 18:40:04.160969  824462 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0809 18:40:04.161037  824462 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0809 18:40:04.161105  824462 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0809 18:40:04.161144  824462 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0809 18:40:04.161193  824462 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0809 18:40:04.161244  824462 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0809 18:40:04.161332  824462 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0809 18:40:04.161458  824462 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0809 18:40:04.161590  824462 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0809 18:40:04.161665  824462 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0809 18:40:04.163286  824462 out.go:204]   - Generating certificates and keys ...
	I0809 18:40:04.163378  824462 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0809 18:40:04.163456  824462 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0809 18:40:04.163543  824462 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0809 18:40:04.163619  824462 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0809 18:40:04.163732  824462 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0809 18:40:04.163803  824462 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0809 18:40:04.163878  824462 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0809 18:40:04.164026  824462 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-922218 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0809 18:40:04.164092  824462 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0809 18:40:04.164243  824462 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-922218 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0809 18:40:04.164329  824462 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0809 18:40:04.164402  824462 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0809 18:40:04.164459  824462 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0809 18:40:04.164528  824462 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0809 18:40:04.164588  824462 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0809 18:40:04.164653  824462 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0809 18:40:04.164771  824462 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0809 18:40:04.164851  824462 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0809 18:40:04.164993  824462 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0809 18:40:04.165103  824462 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0809 18:40:04.165169  824462 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0809 18:40:04.165242  824462 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0809 18:40:04.167610  824462 out.go:204]   - Booting up control plane ...
	I0809 18:40:04.167793  824462 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0809 18:40:04.167901  824462 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0809 18:40:04.168007  824462 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0809 18:40:04.168132  824462 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0809 18:40:04.168372  824462 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0809 18:40:04.168503  824462 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.502457 seconds
	I0809 18:40:04.168649  824462 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0809 18:40:04.168837  824462 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0809 18:40:04.168925  824462 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0809 18:40:04.169171  824462 kubeadm.go:322] [mark-control-plane] Marking the node addons-922218 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0809 18:40:04.169255  824462 kubeadm.go:322] [bootstrap-token] Using token: c1c5zs.okxv0rvrpurc62be
	I0809 18:40:04.170812  824462 out.go:204]   - Configuring RBAC rules ...
	I0809 18:40:04.170954  824462 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0809 18:40:04.171048  824462 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0809 18:40:04.171210  824462 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0809 18:40:04.171325  824462 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0809 18:40:04.171433  824462 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0809 18:40:04.171570  824462 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0809 18:40:04.171739  824462 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0809 18:40:04.171777  824462 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0809 18:40:04.171816  824462 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0809 18:40:04.171822  824462 kubeadm.go:322] 
	I0809 18:40:04.171870  824462 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0809 18:40:04.171875  824462 kubeadm.go:322] 
	I0809 18:40:04.171958  824462 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0809 18:40:04.171969  824462 kubeadm.go:322] 
	I0809 18:40:04.172009  824462 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0809 18:40:04.172084  824462 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0809 18:40:04.172166  824462 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0809 18:40:04.172178  824462 kubeadm.go:322] 
	I0809 18:40:04.172251  824462 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0809 18:40:04.172258  824462 kubeadm.go:322] 
	I0809 18:40:04.172314  824462 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0809 18:40:04.172321  824462 kubeadm.go:322] 
	I0809 18:40:04.172395  824462 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0809 18:40:04.172503  824462 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0809 18:40:04.172601  824462 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0809 18:40:04.172608  824462 kubeadm.go:322] 
	I0809 18:40:04.172674  824462 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0809 18:40:04.172743  824462 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0809 18:40:04.172753  824462 kubeadm.go:322] 
	I0809 18:40:04.172843  824462 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token c1c5zs.okxv0rvrpurc62be \
	I0809 18:40:04.172942  824462 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:85b611322a15d151ff0bcc3d793970c57c3eadea4a879931fb9494c00472255c \
	I0809 18:40:04.172978  824462 kubeadm.go:322] 	--control-plane 
	I0809 18:40:04.172987  824462 kubeadm.go:322] 
	I0809 18:40:04.173097  824462 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0809 18:40:04.173109  824462 kubeadm.go:322] 
	I0809 18:40:04.173225  824462 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token c1c5zs.okxv0rvrpurc62be \
	I0809 18:40:04.173390  824462 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:85b611322a15d151ff0bcc3d793970c57c3eadea4a879931fb9494c00472255c 
	I0809 18:40:04.173405  824462 cni.go:84] Creating CNI manager for ""
	I0809 18:40:04.173413  824462 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0809 18:40:04.175043  824462 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0809 18:40:04.176309  824462 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0809 18:40:04.180107  824462 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.27.4/kubectl ...
	I0809 18:40:04.180126  824462 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0809 18:40:04.196978  824462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0809 18:40:04.859993  824462 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0809 18:40:04.860069  824462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl label nodes minikube.k8s.io/version=v1.31.1 minikube.k8s.io/commit=e286a113bb5db20a65222adef757d15268cdbb1a minikube.k8s.io/name=addons-922218 minikube.k8s.io/updated_at=2023_08_09T18_40_04_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:40:04.860070  824462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:40:04.867029  824462 ops.go:34] apiserver oom_adj: -16
	I0809 18:40:04.978217  824462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:40:05.045337  824462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:40:05.614646  824462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:40:06.114613  824462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:40:06.614721  824462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:40:07.114247  824462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:40:07.614820  824462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:40:08.114390  824462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:40:08.614790  824462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:40:09.114203  824462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:40:09.614246  824462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:40:10.115019  824462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:40:10.614792  824462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:40:11.114606  824462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:40:11.614478  824462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:40:12.114639  824462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:40:12.614731  824462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:40:13.115027  824462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:40:13.614820  824462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:40:14.114921  824462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:40:14.615050  824462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:40:15.114474  824462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:40:15.614683  824462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:40:16.114253  824462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:40:16.614950  824462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:40:16.681191  824462 kubeadm.go:1081] duration metric: took 11.821182694s to wait for elevateKubeSystemPrivileges.
	I0809 18:40:16.681232  824462 kubeadm.go:406] StartCluster complete in 22.455196379s
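
The burst of `kubectl get sa default` invocations at roughly 500 ms intervals above is a readiness poll: kubeadm needs a few seconds after init to materialize the default service account and its RBAC, so minikube retries until the command succeeds (11.8 s here, per the elevateKubeSystemPrivileges metric). The general shape of such a loop, as a sketch with a hypothetical run helper that simply execs locally:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "time"
    )

    // run stands in for minikube's ssh_runner; here it just execs locally.
    func run(args ...string) error {
        return exec.Command(args[0], args[1:]...).Run()
    }

    // pollUntil retries the command every interval until it succeeds or the
    // timeout elapses, mirroring the retry cadence visible in the log.
    func pollUntil(timeout, interval time.Duration, args ...string) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if err := run(args...); err == nil {
                return nil
            }
            time.Sleep(interval)
        }
        return errors.New("timed out waiting for command to succeed")
    }

    func main() {
        start := time.Now()
        err := pollUntil(2*time.Minute, 500*time.Millisecond,
            "kubectl", "get", "sa", "default", "--kubeconfig", "/var/lib/minikube/kubeconfig")
        fmt.Println(err, "after", time.Since(start))
    }
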
	I0809 18:40:16.681261  824462 settings.go:142] acquiring lock: {Name:mk873daac26ba3897eede1f5f8e0b40f2c63510f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 18:40:16.681391  824462 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17011-816603/kubeconfig
	I0809 18:40:16.681899  824462 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17011-816603/kubeconfig: {Name:mk4f98edb5dc8df50bdb1180a23f12dadd75d59f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 18:40:16.682115  824462 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0809 18:40:16.682260  824462 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0809 18:40:16.682400  824462 config.go:182] Loaded profile config "addons-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0809 18:40:16.682426  824462 addons.go:69] Setting volumesnapshots=true in profile "addons-922218"
	I0809 18:40:16.682433  824462 addons.go:69] Setting helm-tiller=true in profile "addons-922218"
	I0809 18:40:16.682449  824462 addons.go:69] Setting inspektor-gadget=true in profile "addons-922218"
	I0809 18:40:16.682462  824462 addons.go:231] Setting addon volumesnapshots=true in "addons-922218"
	I0809 18:40:16.682465  824462 addons.go:231] Setting addon helm-tiller=true in "addons-922218"
	I0809 18:40:16.682472  824462 addons.go:69] Setting ingress=true in profile "addons-922218"
	I0809 18:40:16.682488  824462 addons.go:231] Setting addon ingress=true in "addons-922218"
	I0809 18:40:16.682525  824462 host.go:66] Checking if "addons-922218" exists ...
	I0809 18:40:16.682540  824462 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-922218"
	I0809 18:40:16.682539  824462 addons.go:69] Setting ingress-dns=true in profile "addons-922218"
	I0809 18:40:16.682580  824462 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-922218"
	I0809 18:40:16.682592  824462 addons.go:231] Setting addon ingress-dns=true in "addons-922218"
	I0809 18:40:16.682595  824462 addons.go:69] Setting registry=true in profile "addons-922218"
	I0809 18:40:16.682619  824462 host.go:66] Checking if "addons-922218" exists ...
	I0809 18:40:16.682621  824462 host.go:66] Checking if "addons-922218" exists ...
	I0809 18:40:16.682641  824462 addons.go:231] Setting addon registry=true in "addons-922218"
	I0809 18:40:16.682746  824462 host.go:66] Checking if "addons-922218" exists ...
	I0809 18:40:16.683059  824462 cli_runner.go:164] Run: docker container inspect addons-922218 --format={{.State.Status}}
	I0809 18:40:16.683073  824462 addons.go:69] Setting storage-provisioner=true in profile "addons-922218"
	I0809 18:40:16.683085  824462 addons.go:231] Setting addon storage-provisioner=true in "addons-922218"
	I0809 18:40:16.683114  824462 cli_runner.go:164] Run: docker container inspect addons-922218 --format={{.State.Status}}
	I0809 18:40:16.683122  824462 host.go:66] Checking if "addons-922218" exists ...
	I0809 18:40:16.682534  824462 addons.go:69] Setting cloud-spanner=true in profile "addons-922218"
	I0809 18:40:16.683220  824462 addons.go:231] Setting addon cloud-spanner=true in "addons-922218"
	I0809 18:40:16.683228  824462 cli_runner.go:164] Run: docker container inspect addons-922218 --format={{.State.Status}}
	I0809 18:40:16.683257  824462 host.go:66] Checking if "addons-922218" exists ...
	I0809 18:40:16.683346  824462 addons.go:69] Setting metrics-server=true in profile "addons-922218"
	I0809 18:40:16.683364  824462 addons.go:231] Setting addon metrics-server=true in "addons-922218"
	I0809 18:40:16.683401  824462 host.go:66] Checking if "addons-922218" exists ...
	I0809 18:40:16.683511  824462 cli_runner.go:164] Run: docker container inspect addons-922218 --format={{.State.Status}}
	I0809 18:40:16.683680  824462 cli_runner.go:164] Run: docker container inspect addons-922218 --format={{.State.Status}}
	I0809 18:40:16.682526  824462 host.go:66] Checking if "addons-922218" exists ...
	I0809 18:40:16.682462  824462 addons.go:231] Setting addon inspektor-gadget=true in "addons-922218"
	I0809 18:40:16.683807  824462 host.go:66] Checking if "addons-922218" exists ...
	I0809 18:40:16.684101  824462 cli_runner.go:164] Run: docker container inspect addons-922218 --format={{.State.Status}}
	I0809 18:40:16.684189  824462 cli_runner.go:164] Run: docker container inspect addons-922218 --format={{.State.Status}}
	I0809 18:40:16.684264  824462 addons.go:69] Setting default-storageclass=true in profile "addons-922218"
	I0809 18:40:16.684278  824462 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-922218"
	I0809 18:40:16.684486  824462 cli_runner.go:164] Run: docker container inspect addons-922218 --format={{.State.Status}}
	I0809 18:40:16.684546  824462 cli_runner.go:164] Run: docker container inspect addons-922218 --format={{.State.Status}}
	I0809 18:40:16.683059  824462 cli_runner.go:164] Run: docker container inspect addons-922218 --format={{.State.Status}}
	I0809 18:40:16.684631  824462 addons.go:69] Setting gcp-auth=true in profile "addons-922218"
	I0809 18:40:16.684649  824462 mustload.go:65] Loading cluster: addons-922218
	I0809 18:40:16.684823  824462 config.go:182] Loaded profile config "addons-922218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0809 18:40:16.685043  824462 cli_runner.go:164] Run: docker container inspect addons-922218 --format={{.State.Status}}
	I0809 18:40:16.682530  824462 host.go:66] Checking if "addons-922218" exists ...
	I0809 18:40:16.687603  824462 cli_runner.go:164] Run: docker container inspect addons-922218 --format={{.State.Status}}
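
Each addon goroutine above re-verifies that the machine is still up with `docker container inspect --format={{.State.Status}}`, exactly as logged. Reduced to its essentials, that check is a one-liner around the docker CLI:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("docker", "container", "inspect",
            "--format", "{{.State.Status}}", "addons-922218").Output()
        if err != nil {
            log.Fatal(err)
        }
        status := strings.TrimSpace(string(out)) // e.g. "running"
        fmt.Println("container state:", status)
    }
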
	I0809 18:40:16.713119  824462 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0809 18:40:16.714608  824462 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0809 18:40:16.714631  824462 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0809 18:40:16.714703  824462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-922218
	I0809 18:40:16.724330  824462 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.8
	I0809 18:40:16.726933  824462 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0809 18:40:16.726773  824462 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I0809 18:40:16.726871  824462 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0809 18:40:16.726909  824462 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0809 18:40:16.730518  824462 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0809 18:40:16.730539  824462 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0809 18:40:16.728594  824462 out.go:177]   - Using image docker.io/registry:2.8.1
	I0809 18:40:16.731912  824462 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0809 18:40:16.733918  824462 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I0809 18:40:16.733943  824462 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0809 18:40:16.730606  824462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-922218
	I0809 18:40:16.734013  824462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-922218
	I0809 18:40:16.735707  824462 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0809 18:40:16.731927  824462 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0809 18:40:16.728921  824462 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)
	I0809 18:40:16.734966  824462 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-922218" context rescaled to 1 replicas
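	(Stock kubeadm clusters ship the coredns Deployment with two replicas; for a single-node profile minikube rescales it to one. A roughly equivalent manual command, using the context from this run:

	  kubectl --context addons-922218 -n kube-system scale deployment coredns --replicas=1
	)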
	I0809 18:40:16.737148  824462 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0809 18:40:16.738600  824462 out.go:177] * Verifying Kubernetes components...
	I0809 18:40:16.737294  824462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-922218
	I0809 18:40:16.737381  824462 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0809 18:40:16.737415  824462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-922218
	I0809 18:40:16.740088  824462 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0809 18:40:16.740201  824462 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0809 18:40:16.740254  824462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-922218
	I0809 18:40:16.775893  824462 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0809 18:40:16.774632  824462 host.go:66] Checking if "addons-922218" exists ...
	I0809 18:40:16.781182  824462 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0809 18:40:16.784679  824462 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0809 18:40:16.784836  824462 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.19.0
	I0809 18:40:16.784851  824462 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0809 18:40:16.786354  824462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-922218
	I0809 18:40:16.786534  824462 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0809 18:40:16.788603  824462 addons.go:231] Setting addon default-storageclass=true in "addons-922218"
	I0809 18:40:16.788637  824462 host.go:66] Checking if "addons-922218" exists ...
	I0809 18:40:16.790068  824462 cli_runner.go:164] Run: docker container inspect addons-922218 --format={{.State.Status}}
	I0809 18:40:16.790832  824462 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0809 18:40:16.790990  824462 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0809 18:40:16.792322  824462 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0809 18:40:16.792383  824462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-922218
	I0809 18:40:16.792473  824462 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0809 18:40:16.793837  824462 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0809 18:40:16.795176  824462 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0809 18:40:16.796509  824462 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0809 18:40:16.796103  824462 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33407 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/addons-922218/id_rsa Username:docker}
	I0809 18:40:16.799968  824462 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0809 18:40:16.801347  824462 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0809 18:40:16.801369  824462 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0809 18:40:16.801429  824462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-922218
	I0809 18:40:16.802588  824462 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33407 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/addons-922218/id_rsa Username:docker}
	I0809 18:40:16.807206  824462 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0809 18:40:16.810430  824462 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.8.1
	I0809 18:40:16.808935  824462 node_ready.go:35] waiting up to 6m0s for node "addons-922218" to be "Ready" ...
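	(node_ready.go polls the node's Ready condition; the node_ready.go:58 entries further down show it reading "Ready":"False" until the kubelet and CNI settle. A sketch of the same check done by hand, assuming the context and node name from this run:

	  kubectl --context addons-922218 get node addons-922218 \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	  # False while the node is still coming up, True once Ready
	)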
	I0809 18:40:16.813974  824462 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0809 18:40:16.815932  824462 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0809 18:40:16.817645  824462 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0809 18:40:16.817664  824462 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0809 18:40:16.817790  824462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-922218
	I0809 18:40:16.816744  824462 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33407 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/addons-922218/id_rsa Username:docker}
	I0809 18:40:16.820874  824462 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33407 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/addons-922218/id_rsa Username:docker}
	I0809 18:40:16.821893  824462 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33407 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/addons-922218/id_rsa Username:docker}
	I0809 18:40:16.828315  824462 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33407 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/addons-922218/id_rsa Username:docker}
	I0809 18:40:16.831368  824462 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33407 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/addons-922218/id_rsa Username:docker}
	I0809 18:40:16.834613  824462 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0809 18:40:16.834632  824462 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0809 18:40:16.834673  824462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-922218
	I0809 18:40:16.844369  824462 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33407 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/addons-922218/id_rsa Username:docker}
	I0809 18:40:16.844512  824462 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33407 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/addons-922218/id_rsa Username:docker}
	I0809 18:40:16.844540  824462 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33407 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/addons-922218/id_rsa Username:docker}
	I0809 18:40:16.852744  824462 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33407 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/addons-922218/id_rsa Username:docker}
	W0809 18:40:16.859011  824462 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0809 18:40:16.859047  824462 retry.go:31] will retry after 236.493977ms: ssh: handshake failed: EOF
	I0809 18:40:17.158735  824462 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0809 18:40:17.158766  824462 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0809 18:40:17.163178  824462 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I0809 18:40:17.163206  824462 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0809 18:40:17.267955  824462 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0809 18:40:17.271897  824462 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0809 18:40:17.271987  824462 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0809 18:40:17.277158  824462 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0809 18:40:17.277240  824462 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0809 18:40:17.278310  824462 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0809 18:40:17.278331  824462 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0809 18:40:17.356219  824462 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0809 18:40:17.356734  824462 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0809 18:40:17.366463  824462 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0809 18:40:17.366984  824462 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0809 18:40:17.367006  824462 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0809 18:40:17.456861  824462 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0809 18:40:17.456950  824462 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0809 18:40:17.459065  824462 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0809 18:40:17.459098  824462 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0809 18:40:17.463192  824462 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0809 18:40:17.463224  824462 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0809 18:40:17.472723  824462 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0809 18:40:17.472757  824462 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0809 18:40:17.557550  824462 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0809 18:40:17.557589  824462 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0809 18:40:17.572787  824462 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0809 18:40:17.662342  824462 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0809 18:40:17.669137  824462 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0809 18:40:17.669169  824462 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0809 18:40:17.672519  824462 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0809 18:40:17.672561  824462 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0809 18:40:17.755166  824462 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0809 18:40:17.755264  824462 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0809 18:40:17.755963  824462 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0809 18:40:17.759125  824462 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0809 18:40:17.759209  824462 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0809 18:40:17.873212  824462 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0809 18:40:17.873302  824462 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0809 18:40:17.962171  824462 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0809 18:40:17.970512  824462 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0809 18:40:17.970588  824462 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0809 18:40:18.174530  824462 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0809 18:40:18.174625  824462 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0809 18:40:18.259202  824462 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0809 18:40:18.259231  824462 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0809 18:40:18.267038  824462 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0809 18:40:18.267073  824462 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0809 18:40:18.471468  824462 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0809 18:40:18.471503  824462 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0809 18:40:18.573735  824462 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0809 18:40:18.573767  824462 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0809 18:40:18.660668  824462 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0809 18:40:18.676650  824462 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0809 18:40:18.676684  824462 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0809 18:40:18.773214  824462 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.96596157s)
	I0809 18:40:18.773256  824462 start.go:901] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
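	(This is the completion of the sed pipeline started at 18:40:16.807: it rewrites the coredns ConfigMap so cluster DNS resolves host.minikube.internal to the host gateway. Reconstructed from the two sed expressions, the injected Corefile lines are a log directive ahead of errors plus this block ahead of the forward plugin:

	  hosts {
	     192.168.49.1 host.minikube.internal
	     fallthrough
	  }
	)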
	I0809 18:40:18.874461  824462 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0809 18:40:18.874559  824462 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0809 18:40:18.974144  824462 node_ready.go:58] node "addons-922218" has status "Ready":"False"
	I0809 18:40:19.166524  824462 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0809 18:40:19.166617  824462 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0809 18:40:19.177168  824462 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0809 18:40:19.177245  824462 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0809 18:40:19.559081  824462 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0809 18:40:19.766794  824462 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0809 18:40:19.766911  824462 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0809 18:40:20.056107  824462 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0809 18:40:20.056194  824462 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0809 18:40:20.366748  824462 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0809 18:40:20.366841  824462 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0809 18:40:20.655212  824462 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0809 18:40:21.070572  824462 node_ready.go:58] node "addons-922218" has status "Ready":"False"
	I0809 18:40:22.061073  824462 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.793067537s)
	I0809 18:40:22.061241  824462 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.704418488s)
	I0809 18:40:23.269648  824462 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.913315699s)
	I0809 18:40:23.269681  824462 addons.go:467] Verifying addon ingress=true in "addons-922218"
	I0809 18:40:23.271361  824462 out.go:177] * Verifying ingress addon...
	I0809 18:40:23.269777  824462 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.903279254s)
	I0809 18:40:23.269864  824462 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.697035079s)
	I0809 18:40:23.269922  824462 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.607544957s)
	I0809 18:40:23.269965  824462 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.513918763s)
	I0809 18:40:23.270009  824462 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.307763926s)
	I0809 18:40:23.270157  824462 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.609442975s)
	I0809 18:40:23.270244  824462 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.711118912s)
	I0809 18:40:23.272794  824462 addons.go:467] Verifying addon registry=true in "addons-922218"
	W0809 18:40:23.272861  824462 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0809 18:40:23.274347  824462 out.go:177] * Verifying registry addon...
	I0809 18:40:23.272890  824462 retry.go:31] will retry after 130.453342ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
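	(Both failure dumps above are the same ordering race: the snapshot CRDs and the VolumeSnapshotClass that uses them are sent in a single kubectl apply, and the API server cannot map kind "VolumeSnapshotClass" until the freshly created CRD is being served, hence "ensure CRDs are installed first". The retry at 18:40:23.406 below succeeds with apply --force because the first pass already created the CRDs. An explicit fix would split the apply and wait for establishment; a sketch using the file paths from this run:

	  kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	  kubectl wait --for condition=established --timeout=60s \
	    crd/volumesnapshotclasses.snapshot.storage.k8s.io
	  kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	)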
	I0809 18:40:23.272801  824462 addons.go:467] Verifying addon metrics-server=true in "addons-922218"
	I0809 18:40:23.273572  824462 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0809 18:40:23.276264  824462 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0809 18:40:23.282099  824462 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0809 18:40:23.282116  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:23.282301  824462 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0809 18:40:23.282319  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:23.285494  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:23.285645  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:23.406371  824462 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0809 18:40:23.473658  824462 node_ready.go:58] node "addons-922218" has status "Ready":"False"
	I0809 18:40:23.587602  824462 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0809 18:40:23.587712  824462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-922218
	I0809 18:40:23.606250  824462 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33407 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/addons-922218/id_rsa Username:docker}
	I0809 18:40:23.775015  824462 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0809 18:40:23.790099  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:23.790391  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:23.796071  824462 addons.go:231] Setting addon gcp-auth=true in "addons-922218"
	I0809 18:40:23.796139  824462 host.go:66] Checking if "addons-922218" exists ...
	I0809 18:40:23.796657  824462 cli_runner.go:164] Run: docker container inspect addons-922218 --format={{.State.Status}}
	I0809 18:40:23.814701  824462 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0809 18:40:23.814751  824462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-922218
	I0809 18:40:23.830402  824462 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33407 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/addons-922218/id_rsa Username:docker}
	I0809 18:40:24.082378  824462 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.427040253s)
	I0809 18:40:24.082423  824462 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-922218"
	I0809 18:40:24.084221  824462 out.go:177] * Verifying csi-hostpath-driver addon...
	I0809 18:40:24.086741  824462 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0809 18:40:24.159122  824462 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0809 18:40:24.159146  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:24.163435  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:24.290031  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:24.290396  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:24.669473  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:24.856468  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:24.856894  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:25.169353  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:25.359794  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:25.361637  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:25.475590  824462 node_ready.go:58] node "addons-922218" has status "Ready":"False"
	I0809 18:40:25.676676  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:25.865416  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:25.866426  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:26.055059  824462 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.648565812s)
	I0809 18:40:26.055144  824462 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.240417594s)
	I0809 18:40:26.057699  824462 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0809 18:40:26.059299  824462 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0809 18:40:26.060601  824462 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0809 18:40:26.060631  824462 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0809 18:40:26.169382  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:26.169539  824462 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0809 18:40:26.169558  824462 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0809 18:40:26.272544  824462 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0809 18:40:26.272572  824462 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0809 18:40:26.360626  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:26.361753  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:26.371015  824462 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0809 18:40:26.668899  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:26.859953  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:26.860876  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:27.168762  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:27.359676  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:27.360594  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:27.669687  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:27.858873  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:27.858988  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:27.973308  824462 node_ready.go:58] node "addons-922218" has status "Ready":"False"
	I0809 18:40:28.169681  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:28.360284  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:28.360951  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:28.670174  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:28.868017  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:28.869989  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:28.976448  824462 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.60534746s)
	I0809 18:40:28.977414  824462 addons.go:467] Verifying addon gcp-auth=true in "addons-922218"
	I0809 18:40:28.979165  824462 out.go:177] * Verifying gcp-auth addon...
	I0809 18:40:28.981586  824462 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
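	(kapi.go:75 watches the gcp-auth webhook pod by label, the same mechanism behind the ingress, registry, and csi-hostpath-driver waits above; the long runs of kapi.go:96 lines below are those polls repeating while the pods are still Pending. A roughly equivalent one-shot wait, with the label and namespace taken from this line:

	  kubectl --context addons-922218 -n gcp-auth wait pod \
	    -l kubernetes.io/minikube-addons=gcp-auth \
	    --for=condition=Ready --timeout=6m
	)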
	I0809 18:40:29.055791  824462 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0809 18:40:29.055820  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:29.059527  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:29.169744  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:29.290318  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:29.290542  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:29.564102  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:29.668018  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:29.790249  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:29.790353  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:29.973514  824462 node_ready.go:58] node "addons-922218" has status "Ready":"False"
	I0809 18:40:30.064080  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:30.168676  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:30.291758  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:30.292321  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:30.563866  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:30.668604  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:30.790953  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:30.791149  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:31.063560  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:31.169070  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:31.289874  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:31.290169  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:31.563129  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:31.668343  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:31.790039  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:31.790185  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:32.063601  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:32.168899  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:32.289747  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:32.289819  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:32.473230  824462 node_ready.go:58] node "addons-922218" has status "Ready":"False"
	I0809 18:40:32.563400  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:32.668483  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:32.790287  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:32.790655  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:33.063849  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:33.167871  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:33.290334  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:33.290933  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:33.564047  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:33.668907  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:33.791049  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:33.791405  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:34.063890  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:34.168701  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:34.290803  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:34.290895  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:34.563176  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:34.668599  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:34.790742  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:34.790790  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:34.973301  824462 node_ready.go:58] node "addons-922218" has status "Ready":"False"
	I0809 18:40:35.063377  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:35.168061  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:35.290064  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:35.290107  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:35.563131  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:35.668850  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:35.790369  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:35.790479  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:36.063951  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:36.167779  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:36.290477  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:36.290542  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:36.563426  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:36.668504  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:36.790593  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:36.790613  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:37.063883  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:37.168050  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:37.290144  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:37.290309  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:37.473120  824462 node_ready.go:58] node "addons-922218" has status "Ready":"False"
	I0809 18:40:37.563475  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:37.668133  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:37.789947  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:37.789956  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:38.064024  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:38.168032  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:38.289462  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:38.289832  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:38.563651  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:38.668126  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:38.790076  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:38.790104  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:39.063153  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:39.168128  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:39.289741  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:39.289756  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:39.562994  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:39.667853  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:39.790481  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:39.790498  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:39.973096  824462 node_ready.go:58] node "addons-922218" has status "Ready":"False"
	I0809 18:40:40.063386  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:40.168116  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:40.289718  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:40.289910  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:40.562939  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:40.669215  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:40.790578  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:40.790714  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:41.063675  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:41.168459  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:41.289721  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:41.289814  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:41.563148  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:41.668341  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:41.789951  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:41.790166  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:41.973368  824462 node_ready.go:58] node "addons-922218" has status "Ready":"False"
	I0809 18:40:42.064204  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:42.168000  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:42.290381  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:42.290706  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:42.563432  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:42.668233  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:42.789708  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:42.789831  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:43.063659  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:43.168571  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:43.289978  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:43.290293  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:43.562966  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:43.668216  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:43.789557  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:43.789895  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:44.063853  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:44.168088  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:44.289525  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:44.289811  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:44.472542  824462 node_ready.go:58] node "addons-922218" has status "Ready":"False"
	I0809 18:40:44.563751  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:44.667831  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:44.790624  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:44.790901  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:45.063586  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:45.168366  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:45.289999  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:45.290059  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:45.563403  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:45.668179  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:45.789567  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:45.789784  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:46.063996  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:46.167663  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:46.290475  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:46.290899  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:46.473253  824462 node_ready.go:58] node "addons-922218" has status "Ready":"False"
	I0809 18:40:46.563399  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:46.668354  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:46.790975  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:46.791123  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:47.062906  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:47.167885  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:47.289284  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:47.289516  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:47.563071  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:47.667881  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:47.789543  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:47.789645  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:48.064100  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:48.167947  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:48.289365  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:48.289638  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:48.473481  824462 node_ready.go:58] node "addons-922218" has status "Ready":"False"
	I0809 18:40:48.563535  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:48.668219  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:48.789783  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:48.789959  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:49.062783  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:49.168707  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:49.290521  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:49.290890  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:49.563384  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:49.668074  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:49.789595  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:49.789826  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:50.063100  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:50.168165  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:50.290441  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:50.290870  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:50.474601  824462 node_ready.go:49] node "addons-922218" has status "Ready":"True"
	I0809 18:40:50.474632  824462 node_ready.go:38] duration metric: took 33.662635579s waiting for node "addons-922218" to be "Ready" ...
	I0809 18:40:50.474643  824462 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
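The node_ready gate that just completed (roughly 33.7s) watches the node's Ready condition. A rough manual equivalent, assuming the kubeconfig context this run creates:

    # show node status, then block until the Ready condition is True
    kubectl --context addons-922218 get nodes
    kubectl --context addons-922218 wait --for=condition=Ready node/addons-922218 --timeout=90s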
	I0809 18:40:50.483840  824462 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-f9mtp" in "kube-system" namespace to be "Ready" ...
	I0809 18:40:50.564594  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:50.677571  824462 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0809 18:40:50.677600  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:50.791500  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:50.791512  824462 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0809 18:40:50.791533  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:51.064081  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:51.171871  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:51.291306  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:51.291424  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:51.566787  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:51.671660  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:51.862852  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:51.863926  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:52.062904  824462 pod_ready.go:92] pod "coredns-5d78c9869d-f9mtp" in "kube-system" namespace has status "Ready":"True"
	I0809 18:40:52.063001  824462 pod_ready.go:81] duration metric: took 1.579125257s waiting for pod "coredns-5d78c9869d-f9mtp" in "kube-system" namespace to be "Ready" ...
	I0809 18:40:52.063062  824462 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-922218" in "kube-system" namespace to be "Ready" ...
	I0809 18:40:52.065703  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:52.070325  824462 pod_ready.go:92] pod "etcd-addons-922218" in "kube-system" namespace has status "Ready":"True"
	I0809 18:40:52.070356  824462 pod_ready.go:81] duration metric: took 7.271578ms waiting for pod "etcd-addons-922218" in "kube-system" namespace to be "Ready" ...
	I0809 18:40:52.070374  824462 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-922218" in "kube-system" namespace to be "Ready" ...
	I0809 18:40:52.077076  824462 pod_ready.go:92] pod "kube-apiserver-addons-922218" in "kube-system" namespace has status "Ready":"True"
	I0809 18:40:52.077148  824462 pod_ready.go:81] duration metric: took 6.764558ms waiting for pod "kube-apiserver-addons-922218" in "kube-system" namespace to be "Ready" ...
	I0809 18:40:52.077171  824462 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-922218" in "kube-system" namespace to be "Ready" ...
	I0809 18:40:52.082587  824462 pod_ready.go:92] pod "kube-controller-manager-addons-922218" in "kube-system" namespace has status "Ready":"True"
	I0809 18:40:52.082613  824462 pod_ready.go:81] duration metric: took 5.42592ms waiting for pod "kube-controller-manager-addons-922218" in "kube-system" namespace to be "Ready" ...
	I0809 18:40:52.082631  824462 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sn4cp" in "kube-system" namespace to be "Ready" ...
	I0809 18:40:52.170425  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:52.291204  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:52.292017  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:52.475436  824462 pod_ready.go:92] pod "kube-proxy-sn4cp" in "kube-system" namespace has status "Ready":"True"
	I0809 18:40:52.475461  824462 pod_ready.go:81] duration metric: took 392.823642ms waiting for pod "kube-proxy-sn4cp" in "kube-system" namespace to be "Ready" ...
	I0809 18:40:52.475471  824462 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-922218" in "kube-system" namespace to be "Ready" ...
	I0809 18:40:52.563508  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:52.670022  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:52.790902  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:52.791697  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:52.874164  824462 pod_ready.go:92] pod "kube-scheduler-addons-922218" in "kube-system" namespace has status "Ready":"True"
	I0809 18:40:52.874192  824462 pod_ready.go:81] duration metric: took 398.714318ms waiting for pod "kube-scheduler-addons-922218" in "kube-system" namespace to be "Ready" ...
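The pod_ready checks above resolve pods from the selector list logged at 18:40:50.474643 and then wait on each pod by name. A selector-based wait approximates the same thing by hand; the selectors here are two entries taken from that list:

    kubectl --context addons-922218 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s
    kubectl --context addons-922218 -n kube-system wait --for=condition=Ready pod -l component=kube-scheduler --timeout=120s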
	I0809 18:40:52.874206  824462 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7746886d4f-wthr7" in "kube-system" namespace to be "Ready" ...
	I0809 18:40:53.063801  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:53.172628  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:53.289925  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:53.290041  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:53.564025  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:53.670222  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:53.791343  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:53.791519  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:54.064034  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:54.172541  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:54.290667  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:54.290738  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:54.564478  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:54.670278  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:54.790438  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:54.790620  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:55.064001  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:55.170420  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:55.180034  824462 pod_ready.go:102] pod "metrics-server-7746886d4f-wthr7" in "kube-system" namespace has status "Ready":"False"
	I0809 18:40:55.290018  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:55.290332  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:55.563162  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:55.670027  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:55.790636  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:55.791076  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:56.064075  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:56.170284  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:56.292389  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:56.292514  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:56.563745  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:56.669420  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:56.791765  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:56.791776  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:57.064570  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:57.168947  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:57.290904  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:57.290912  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:57.562931  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:57.669197  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:57.680038  824462 pod_ready.go:102] pod "metrics-server-7746886d4f-wthr7" in "kube-system" namespace has status "Ready":"False"
	I0809 18:40:57.790781  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:57.791066  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:58.065212  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:58.169985  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:58.367673  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:58.369072  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:58.564324  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:58.669329  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:58.792132  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:58.793405  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:59.063274  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:59.169593  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:59.290449  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:59.290463  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:40:59.563848  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:40:59.669401  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:40:59.680476  824462 pod_ready.go:102] pod "metrics-server-7746886d4f-wthr7" in "kube-system" namespace has status "Ready":"False"
	I0809 18:40:59.791622  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:40:59.791776  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:41:00.063576  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:00.169023  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:00.290668  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:41:00.290777  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:41:00.564073  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:00.669454  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:00.790292  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:41:00.791036  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:41:01.064357  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:01.169708  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:01.290594  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:41:01.291258  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:41:01.564020  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:01.668671  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:01.790736  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:41:01.791358  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:41:02.064517  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:02.169956  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:02.181048  824462 pod_ready.go:102] pod "metrics-server-7746886d4f-wthr7" in "kube-system" namespace has status "Ready":"False"
	I0809 18:41:02.290464  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:41:02.291275  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:41:02.562991  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:02.669119  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:02.791351  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:41:02.791826  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:41:03.064836  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:03.170281  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:03.294850  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:41:03.295368  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:41:03.563901  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:03.668713  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:03.790690  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:41:03.790946  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:41:04.064386  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:04.170030  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:04.291229  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:41:04.291347  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:41:04.563310  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:04.768352  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:04.768458  824462 pod_ready.go:92] pod "metrics-server-7746886d4f-wthr7" in "kube-system" namespace has status "Ready":"True"
	I0809 18:41:04.768484  824462 pod_ready.go:81] duration metric: took 11.89426889s waiting for pod "metrics-server-7746886d4f-wthr7" in "kube-system" namespace to be "Ready" ...
	I0809 18:41:04.768517  824462 pod_ready.go:38] duration metric: took 14.293858552s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0809 18:41:04.768546  824462 api_server.go:52] waiting for apiserver process to appear ...
	I0809 18:41:04.768612  824462 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0809 18:41:04.786254  824462 api_server.go:72] duration metric: took 48.049058242s to wait for apiserver process to appear ...
	I0809 18:41:04.786280  824462 api_server.go:88] waiting for apiserver healthz status ...
	I0809 18:41:04.786301  824462 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0809 18:41:04.791410  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:41:04.856186  824462 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0809 18:41:04.857739  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:41:04.862947  824462 api_server.go:141] control plane version: v1.27.4
	I0809 18:41:04.862973  824462 api_server.go:131] duration metric: took 76.68581ms to wait for apiserver health ...
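The healthz probe above hits the apiserver endpoint directly. An authenticated equivalent through kubectl avoids dealing with the cluster CA (a raw curl against https://192.168.49.2:8443/healthz would need -k or the CA cert):

    kubectl --context addons-922218 get --raw='/healthz'   # prints "ok" when healthy
    kubectl --context addons-922218 version                # reports the control plane version seen above (v1.27.4)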
	I0809 18:41:04.862984  824462 system_pods.go:43] waiting for kube-system pods to appear ...
	I0809 18:41:04.884321  824462 system_pods.go:59] 18 kube-system pods found
	I0809 18:41:04.884364  824462 system_pods.go:61] "coredns-5d78c9869d-f9mtp" [b8effa52-9a2b-48d5-84c6-fa4164eed23a] Running
	I0809 18:41:04.884377  824462 system_pods.go:61] "csi-hostpath-attacher-0" [2fb68869-d79f-401c-afd2-7911aba44ca5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0809 18:41:04.884386  824462 system_pods.go:61] "csi-hostpath-resizer-0" [abf04eb5-1f60-4125-8ec0-24b20f77913e] Running
	I0809 18:41:04.884397  824462 system_pods.go:61] "csi-hostpathplugin-2fb4d" [bad04acf-0592-48fc-8e90-08bef39ea8b7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0809 18:41:04.884406  824462 system_pods.go:61] "etcd-addons-922218" [73c0dc47-9b6c-4861-b03f-7b234d9713b7] Running
	I0809 18:41:04.884413  824462 system_pods.go:61] "kindnet-rl2vf" [c16ad2dc-bd51-4096-b644-92326d9380c1] Running
	I0809 18:41:04.884421  824462 system_pods.go:61] "kube-apiserver-addons-922218" [381332c8-d115-45dc-83e5-fc1554bc061e] Running
	I0809 18:41:04.884430  824462 system_pods.go:61] "kube-controller-manager-addons-922218" [dbc56465-b6f8-4f44-8db6-57909c317afa] Running
	I0809 18:41:04.884444  824462 system_pods.go:61] "kube-ingress-dns-minikube" [eb4544bb-4783-441c-a80f-38a756ea5b6e] Running
	I0809 18:41:04.884451  824462 system_pods.go:61] "kube-proxy-sn4cp" [d2b8df40-d4cf-4835-bd48-56d409febaf2] Running
	I0809 18:41:04.884460  824462 system_pods.go:61] "kube-scheduler-addons-922218" [7837c0ba-3dbd-4822-9096-0a0144689da8] Running
	I0809 18:41:04.884473  824462 system_pods.go:61] "metrics-server-7746886d4f-wthr7" [c9e9d783-2d9b-419b-b850-f0452b5d09b8] Running
	I0809 18:41:04.884483  824462 system_pods.go:61] "registry-proxy-tfhw2" [321dcadb-ef6a-4c90-9825-67bd7009204e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0809 18:41:04.884498  824462 system_pods.go:61] "registry-wbxwm" [c9a46261-ec6a-4774-a0f1-91725dcd00f5] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0809 18:41:04.884513  824462 system_pods.go:61] "snapshot-controller-75bbb956b9-r2rgf" [a60d5f29-d56f-42c6-96d1-59340e3ff2fb] Running
	I0809 18:41:04.884525  824462 system_pods.go:61] "snapshot-controller-75bbb956b9-rkqlx" [8ff20b88-ccf1-4728-bf91-0d4a863330d0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0809 18:41:04.884533  824462 system_pods.go:61] "storage-provisioner" [a85a17d5-8365-4f6a-afd8-60a4c43f1568] Running
	I0809 18:41:04.884546  824462 system_pods.go:61] "tiller-deploy-6847666dc-dbvcr" [2bef95be-3c35-4dbf-99c8-04b23626ce95] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0809 18:41:04.884558  824462 system_pods.go:74] duration metric: took 21.565833ms to wait for pod list to return data ...
	I0809 18:41:04.884574  824462 default_sa.go:34] waiting for default service account to be created ...
	I0809 18:41:04.887427  824462 default_sa.go:45] found service account: "default"
	I0809 18:41:04.887454  824462 default_sa.go:55] duration metric: took 2.866311ms for default service account to be created ...
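The default service account check is a single GET against the default namespace, for example:

    kubectl --context addons-922218 -n default get serviceaccount default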
	I0809 18:41:04.887465  824462 system_pods.go:116] waiting for k8s-apps to be running ...
	I0809 18:41:04.897704  824462 system_pods.go:86] 18 kube-system pods found
	I0809 18:41:04.897732  824462 system_pods.go:89] "coredns-5d78c9869d-f9mtp" [b8effa52-9a2b-48d5-84c6-fa4164eed23a] Running
	I0809 18:41:04.897741  824462 system_pods.go:89] "csi-hostpath-attacher-0" [2fb68869-d79f-401c-afd2-7911aba44ca5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0809 18:41:04.897748  824462 system_pods.go:89] "csi-hostpath-resizer-0" [abf04eb5-1f60-4125-8ec0-24b20f77913e] Running
	I0809 18:41:04.897757  824462 system_pods.go:89] "csi-hostpathplugin-2fb4d" [bad04acf-0592-48fc-8e90-08bef39ea8b7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0809 18:41:04.897763  824462 system_pods.go:89] "etcd-addons-922218" [73c0dc47-9b6c-4861-b03f-7b234d9713b7] Running
	I0809 18:41:04.897768  824462 system_pods.go:89] "kindnet-rl2vf" [c16ad2dc-bd51-4096-b644-92326d9380c1] Running
	I0809 18:41:04.897772  824462 system_pods.go:89] "kube-apiserver-addons-922218" [381332c8-d115-45dc-83e5-fc1554bc061e] Running
	I0809 18:41:04.897777  824462 system_pods.go:89] "kube-controller-manager-addons-922218" [dbc56465-b6f8-4f44-8db6-57909c317afa] Running
	I0809 18:41:04.897785  824462 system_pods.go:89] "kube-ingress-dns-minikube" [eb4544bb-4783-441c-a80f-38a756ea5b6e] Running
	I0809 18:41:04.897789  824462 system_pods.go:89] "kube-proxy-sn4cp" [d2b8df40-d4cf-4835-bd48-56d409febaf2] Running
	I0809 18:41:04.897794  824462 system_pods.go:89] "kube-scheduler-addons-922218" [7837c0ba-3dbd-4822-9096-0a0144689da8] Running
	I0809 18:41:04.897799  824462 system_pods.go:89] "metrics-server-7746886d4f-wthr7" [c9e9d783-2d9b-419b-b850-f0452b5d09b8] Running
	I0809 18:41:04.897805  824462 system_pods.go:89] "registry-proxy-tfhw2" [321dcadb-ef6a-4c90-9825-67bd7009204e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0809 18:41:04.897811  824462 system_pods.go:89] "registry-wbxwm" [c9a46261-ec6a-4774-a0f1-91725dcd00f5] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0809 18:41:04.897819  824462 system_pods.go:89] "snapshot-controller-75bbb956b9-r2rgf" [a60d5f29-d56f-42c6-96d1-59340e3ff2fb] Running
	I0809 18:41:04.897826  824462 system_pods.go:89] "snapshot-controller-75bbb956b9-rkqlx" [8ff20b88-ccf1-4728-bf91-0d4a863330d0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0809 18:41:04.897833  824462 system_pods.go:89] "storage-provisioner" [a85a17d5-8365-4f6a-afd8-60a4c43f1568] Running
	I0809 18:41:04.897839  824462 system_pods.go:89] "tiller-deploy-6847666dc-dbvcr" [2bef95be-3c35-4dbf-99c8-04b23626ce95] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0809 18:41:04.897849  824462 system_pods.go:126] duration metric: took 10.379049ms to wait for k8s-apps to be running ...
	I0809 18:41:04.897859  824462 system_svc.go:44] waiting for kubelet service to be running ....
	I0809 18:41:04.897903  824462 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0809 18:41:04.970323  824462 system_svc.go:56] duration metric: took 72.452096ms WaitForService to wait for kubelet.
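The kubelet check above runs systemctl inside the node over SSH. The same probe by hand, assuming the profile name used throughout this run:

    # prints "active" (exit 0) when the kubelet unit is running
    minikube -p addons-922218 ssh "sudo systemctl is-active kubelet"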
	I0809 18:41:04.970357  824462 kubeadm.go:581] duration metric: took 48.233167695s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0809 18:41:04.970389  824462 node_conditions.go:102] verifying NodePressure condition ...
	I0809 18:41:04.974279  824462 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0809 18:41:04.974363  824462 node_conditions.go:123] node cpu capacity is 8
	I0809 18:41:04.974395  824462 node_conditions.go:105] duration metric: took 3.999682ms to run NodePressure ...
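The NodePressure step reads capacity off the node object; the figures above (304681132Ki ephemeral storage, 8 CPUs) come from the same fields this sketch prints:

    kubectl --context addons-922218 get node addons-922218 -o jsonpath='{.status.capacity}'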
	I0809 18:41:04.974420  824462 start.go:228] waiting for startup goroutines ...
	I0809 18:41:05.063324  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:05.170009  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:05.377905  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:41:05.377995  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:41:05.563408  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:05.669207  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:05.876910  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:41:05.877711  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:41:06.063887  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:06.169065  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:06.290044  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:41:06.290517  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:41:06.563559  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:06.669449  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:06.790131  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:41:06.790307  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:41:07.063537  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:07.168915  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:07.292860  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:41:07.294140  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:41:07.563594  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:07.669214  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:07.790937  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:41:07.791106  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:41:08.063712  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:08.168618  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:08.290499  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:41:08.290529  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:41:08.564212  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:08.670544  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:08.790540  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:41:08.790562  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:41:09.063954  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:09.168738  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:09.290228  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:41:09.290451  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:41:09.563544  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:09.671468  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:09.791124  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:41:09.791304  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:41:10.063741  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:10.169496  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:10.292161  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:41:10.292305  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:41:10.563288  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:10.670407  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:10.790575  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:41:10.790825  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:41:11.063862  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:11.170488  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:11.291083  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:41:11.291422  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:41:11.563443  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:11.674188  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:11.797646  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:41:11.856454  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:41:12.064324  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:12.169549  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:12.290654  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:41:12.290659  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:41:12.564434  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:12.669239  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:12.791421  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:41:12.791550  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:41:13.065418  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:13.169546  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:13.290785  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:41:13.291402  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:41:13.564668  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:13.670309  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:13.790789  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:41:13.791037  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:41:14.064015  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:14.169385  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:14.290270  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0809 18:41:14.290483  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:41:14.564660  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:14.671505  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:14.862248  824462 kapi.go:107] duration metric: took 51.585977419s to wait for kubernetes.io/minikube-addons=registry ...
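Each kapi.go:107 completion corresponds to every pod behind one label selector turning Ready. A hedged manual equivalent for the registry selector that just finished (the registry addon pods live in kube-system, per the pod list above):

    kubectl --context addons-922218 -n kube-system wait --for=condition=Ready pod \
      -l kubernetes.io/minikube-addons=registry --timeout=180s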
	I0809 18:41:14.863314  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:41:15.063896  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:15.172636  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:15.361730  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:41:15.563983  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:15.669686  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:15.860531  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:41:16.063593  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:16.170882  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:16.357420  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:41:16.564234  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:16.669779  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:16.790678  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:41:17.063655  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:17.169676  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:17.291728  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:41:17.563550  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:17.669514  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:17.790916  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:41:18.065593  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:18.172240  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:18.290637  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:41:18.564172  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:18.669673  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:18.790152  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:41:19.063565  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:19.170478  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:19.290624  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:41:19.563556  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:19.669546  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:19.790864  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:41:20.064017  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:20.169634  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:20.290987  824462 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0809 18:41:20.563748  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:20.669491  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:20.790104  824462 kapi.go:107] duration metric: took 57.516526291s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0809 18:41:21.063917  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:21.168400  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:21.563718  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:21.672385  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:22.063569  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:22.169872  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:22.563496  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:22.669914  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:23.064052  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:23.169454  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:23.563823  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:23.669040  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:24.063849  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:24.169207  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:24.562972  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0809 18:41:24.668910  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:25.062950  824462 kapi.go:107] duration metric: took 56.081364041s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0809 18:41:25.064723  824462 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-922218 cluster.
	I0809 18:41:25.066217  824462 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0809 18:41:25.067559  824462 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
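A minimal sketch of the opt-out the message above describes. The pod name is hypothetical and the label value "true" is an assumption; the message only names the key:

    kubectl --context addons-922218 apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds                 # hypothetical name, for illustration only
      labels:
        gcp-auth-skip-secret: "true"     # key from the message above; value "true" assumed
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ["sleep", "3600"]
    EOF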
	I0809 18:41:25.169390  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:25.668506  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:26.168993  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:26.669295  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:27.169194  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:27.669505  824462 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0809 18:41:28.169113  824462 kapi.go:107] duration metric: took 1m4.08236489s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0809 18:41:28.170991  824462 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, ingress-dns, inspektor-gadget, default-storageclass, helm-tiller, metrics-server, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0809 18:41:28.172381  824462 addons.go:502] enable addons completed in 1m11.490124466s: enabled=[storage-provisioner cloud-spanner ingress-dns inspektor-gadget default-storageclass helm-tiller metrics-server volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
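The enabled set can be confirmed after the fact with the addons subcommand; -p selects the same profile:

    minikube -p addons-922218 addons list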
	I0809 18:41:28.172418  824462 start.go:233] waiting for cluster config update ...
	I0809 18:41:28.172437  824462 start.go:242] writing updated cluster config ...
	I0809 18:41:28.172738  824462 ssh_runner.go:195] Run: rm -f paused
	I0809 18:41:28.222793  824462 start.go:599] kubectl: 1.27.4, cluster: 1.27.4 (minor skew: 0)
	I0809 18:41:28.224341  824462 out.go:177] * Done! kubectl is now configured to use "addons-922218" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	Aug 09 18:44:04 addons-922218 crio[952]: time="2023-08-09 18:44:04.868075202Z" level=info msg="Pulled image: gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea" id=ef402dd0-f770-4f13-b3f3-0841ccfe092d name=/runtime.v1.ImageService/PullImage
	Aug 09 18:44:04 addons-922218 crio[952]: time="2023-08-09 18:44:04.869039838Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=ad29b640-ba1a-4257-9e34-4d840924d8a1 name=/runtime.v1.ImageService/ImageStatus
	Aug 09 18:44:04 addons-922218 crio[952]: time="2023-08-09 18:44:04.869820372Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:13753a81eccfdd153bf7fc9a4c9198edbcce0110e7f46ed0d38cc654a6458ff5,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea],Size_:28496999,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=ad29b640-ba1a-4257-9e34-4d840924d8a1 name=/runtime.v1.ImageService/ImageStatus
	Aug 09 18:44:04 addons-922218 crio[952]: time="2023-08-09 18:44:04.870733064Z" level=info msg="Creating container: default/hello-world-app-65bdb79f98-xkdj9/hello-world-app" id=bd532760-8035-4f31-9670-03a9f2e4a308 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 09 18:44:04 addons-922218 crio[952]: time="2023-08-09 18:44:04.870834656Z" level=warning msg="Allowed annotations are specified for workload []"
	Aug 09 18:44:04 addons-922218 crio[952]: time="2023-08-09 18:44:04.953033936Z" level=info msg="Created container 81eba5fa191fdbe5baa5f9be5376dee2b97917c1541b5561351c67f2d68d6de7: default/hello-world-app-65bdb79f98-xkdj9/hello-world-app" id=bd532760-8035-4f31-9670-03a9f2e4a308 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 09 18:44:04 addons-922218 crio[952]: time="2023-08-09 18:44:04.953633544Z" level=info msg="Starting container: 81eba5fa191fdbe5baa5f9be5376dee2b97917c1541b5561351c67f2d68d6de7" id=6d7c93cf-e9ad-4b98-905d-256575e67754 name=/runtime.v1.RuntimeService/StartContainer
	Aug 09 18:44:04 addons-922218 crio[952]: time="2023-08-09 18:44:04.961885426Z" level=info msg="Started container" PID=9439 containerID=81eba5fa191fdbe5baa5f9be5376dee2b97917c1541b5561351c67f2d68d6de7 description=default/hello-world-app-65bdb79f98-xkdj9/hello-world-app id=6d7c93cf-e9ad-4b98-905d-256575e67754 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5aa9701f17ff4be9be8622e79c0a04c137dff8a02a7f9c09fdbd1c42f40dcf57
	Aug 09 18:44:05 addons-922218 crio[952]: time="2023-08-09 18:44:05.212516373Z" level=info msg="Removing container: c96493a504f28f2048a936375b0366fd22a49ac19793bb7c8588cd7c4f3e7a4d" id=1470cd41-f6bc-435e-bc3b-9e98b9b8263d name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 09 18:44:05 addons-922218 crio[952]: time="2023-08-09 18:44:05.229146697Z" level=info msg="Removed container c96493a504f28f2048a936375b0366fd22a49ac19793bb7c8588cd7c4f3e7a4d: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=1470cd41-f6bc-435e-bc3b-9e98b9b8263d name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 09 18:44:05 addons-922218 crio[952]: time="2023-08-09 18:44:05.782401717Z" level=info msg="Stopping container: 04e8e59ca3d8db58371a2c8be4eb0b171e27d1f81e66d54166cc3f87e21606eb (timeout: 1s)" id=40eceaee-ac27-4357-b976-56cb5e9f253f name=/runtime.v1.RuntimeService/StopContainer
	Aug 09 18:44:06 addons-922218 crio[952]: time="2023-08-09 18:44:06.793743993Z" level=warning msg="Stopping container 04e8e59ca3d8db58371a2c8be4eb0b171e27d1f81e66d54166cc3f87e21606eb with stop signal timed out: timeout reached after 1 seconds waiting for container process to exit" id=40eceaee-ac27-4357-b976-56cb5e9f253f name=/runtime.v1.RuntimeService/StopContainer
	Aug 09 18:44:06 addons-922218 conmon[5401]: conmon 04e8e59ca3d8db58371a <ninfo>: container 5413 exited with status 137
	Aug 09 18:44:06 addons-922218 crio[952]: time="2023-08-09 18:44:06.939841868Z" level=info msg="Stopped container 04e8e59ca3d8db58371a2c8be4eb0b171e27d1f81e66d54166cc3f87e21606eb: ingress-nginx/ingress-nginx-controller-7799c6795f-6wk7p/controller" id=40eceaee-ac27-4357-b976-56cb5e9f253f name=/runtime.v1.RuntimeService/StopContainer
	Aug 09 18:44:06 addons-922218 crio[952]: time="2023-08-09 18:44:06.940407122Z" level=info msg="Stopping pod sandbox: 58f29d4204ae90496d29a1527ccb6ce207d3eb622c8489cf50c48617a418599d" id=7d3df48d-9832-4633-8f13-5f757f28764f name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 09 18:44:06 addons-922218 crio[952]: time="2023-08-09 18:44:06.943791028Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-NJPWQJMXLJBK5YAA - [0:0]\n:KUBE-HP-5LVJ3JQE7JWDVQHG - [0:0]\n-X KUBE-HP-NJPWQJMXLJBK5YAA\n-X KUBE-HP-5LVJ3JQE7JWDVQHG\nCOMMIT\n"
	Aug 09 18:44:06 addons-922218 crio[952]: time="2023-08-09 18:44:06.945150513Z" level=info msg="Closing host port tcp:80"
	Aug 09 18:44:06 addons-922218 crio[952]: time="2023-08-09 18:44:06.945183997Z" level=info msg="Closing host port tcp:443"
	Aug 09 18:44:06 addons-922218 crio[952]: time="2023-08-09 18:44:06.946548468Z" level=info msg="Host port tcp:80 does not have an open socket"
	Aug 09 18:44:06 addons-922218 crio[952]: time="2023-08-09 18:44:06.946564280Z" level=info msg="Host port tcp:443 does not have an open socket"
	Aug 09 18:44:06 addons-922218 crio[952]: time="2023-08-09 18:44:06.946693055Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7799c6795f-6wk7p Namespace:ingress-nginx ID:58f29d4204ae90496d29a1527ccb6ce207d3eb622c8489cf50c48617a418599d UID:53d6e10b-1852-478b-a327-b54f8d65ec3e NetNS:/var/run/netns/5ede5339-9bda-4699-b59b-d3d680c043a3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 09 18:44:06 addons-922218 crio[952]: time="2023-08-09 18:44:06.946806413Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7799c6795f-6wk7p from CNI network \"kindnet\" (type=ptp)"
	Aug 09 18:44:06 addons-922218 crio[952]: time="2023-08-09 18:44:06.981280472Z" level=info msg="Stopped pod sandbox: 58f29d4204ae90496d29a1527ccb6ce207d3eb622c8489cf50c48617a418599d" id=7d3df48d-9832-4633-8f13-5f757f28764f name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 09 18:44:07 addons-922218 crio[952]: time="2023-08-09 18:44:07.221253905Z" level=info msg="Removing container: 04e8e59ca3d8db58371a2c8be4eb0b171e27d1f81e66d54166cc3f87e21606eb" id=3f38a4de-ca30-4313-93da-331c08c93d75 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 09 18:44:07 addons-922218 crio[952]: time="2023-08-09 18:44:07.235557412Z" level=info msg="Removed container 04e8e59ca3d8db58371a2c8be4eb0b171e27d1f81e66d54166cc3f87e21606eb: ingress-nginx/ingress-nginx-controller-7799c6795f-6wk7p/controller" id=3f38a4de-ca30-4313-93da-331c08c93d75 name=/runtime.v1.RuntimeService/RemoveContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	81eba5fa191fd       gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea                      8 seconds ago       Running             hello-world-app           0                   5aa9701f17ff4       hello-world-app-65bdb79f98-xkdj9
	67c5911bee58f       docker.io/library/nginx@sha256:647c5c83418c19eef0cddc647b9899326e3081576390c4c7baa4fce545123b6c                              2 minutes ago       Running             nginx                     0                   bba29045e3006       nginx
	a1bc12e8a59a7       ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45                        2 minutes ago       Running             headlamp                  0                   7b444ba6b7cda       headlamp-66f6498c69-t8p2k
	a53c17b56d721       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 2 minutes ago       Running             gcp-auth                  0                   0ff25c0ce73a7       gcp-auth-58478865f7-tvlv9
	b0ee1561b9b5e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d   3 minutes ago       Exited              patch                     0                   078b99858beec       ingress-nginx-admission-patch-t6zkl
	8bf9cab6c2493       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d   3 minutes ago       Exited              create                    0                   38d624351d9f1       ingress-nginx-admission-create-lpj47
	6bb76b42055e4       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             3 minutes ago       Running             coredns                   0                   b7b79fa7bb91a       coredns-5d78c9869d-f9mtp
	cdeebb51ba503       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago       Running             storage-provisioner       0                   9561f609da91c       storage-provisioner
	e0a0846098231       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da                                                             3 minutes ago       Running             kindnet-cni               0                   624c8ac88c142       kindnet-rl2vf
	4640a8941840a       6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4                                                             3 minutes ago       Running             kube-proxy                0                   ea1377de79355       kube-proxy-sn4cp
	0625007ab1eda       98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16                                                             4 minutes ago       Running             kube-scheduler            0                   7c826e48d0438       kube-scheduler-addons-922218
	ca8f3c2f8930b       f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5                                                             4 minutes ago       Running             kube-controller-manager   0                   3e82908ef891e       kube-controller-manager-addons-922218
	56e250a7a7aef       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681                                                             4 minutes ago       Running             etcd                      0                   91b7fe0c97199       etcd-addons-922218
	b62463e53ea85       e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c                                                             4 minutes ago       Running             kube-apiserver            0                   f24e18b3d47ca       kube-apiserver-addons-922218
	
	* 
	* ==> coredns [6bb76b42055e4b1261c4b4d7b0217b69441ab03523d932cde978c649056f4822] <==
	* [INFO] 10.244.0.16:35180 - 64759 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000135747s
	[INFO] 10.244.0.16:37734 - 22560 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.00592164s
	[INFO] 10.244.0.16:37734 - 20773 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.008410019s
	[INFO] 10.244.0.16:38105 - 8791 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005848557s
	[INFO] 10.244.0.16:38105 - 63563 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.008023917s
	[INFO] 10.244.0.16:41547 - 29862 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004274353s
	[INFO] 10.244.0.16:41547 - 53673 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005847333s
	[INFO] 10.244.0.16:47621 - 3909 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000086976s
	[INFO] 10.244.0.16:47621 - 45897 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000117584s
	[INFO] 10.244.0.18:35190 - 43568 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000228236s
	[INFO] 10.244.0.18:33740 - 41894 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000332897s
	[INFO] 10.244.0.18:40536 - 62885 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000130608s
	[INFO] 10.244.0.18:57535 - 37957 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000124886s
	[INFO] 10.244.0.18:47652 - 64996 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00012211s
	[INFO] 10.244.0.18:51579 - 2697 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000131144s
	[INFO] 10.244.0.18:60565 - 56032 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.007522015s
	[INFO] 10.244.0.18:60020 - 3519 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.009991729s
	[INFO] 10.244.0.18:40338 - 59155 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.008235008s
	[INFO] 10.244.0.18:39754 - 15420 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.008623487s
	[INFO] 10.244.0.18:41610 - 13202 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006453144s
	[INFO] 10.244.0.18:43092 - 39113 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007198386s
	[INFO] 10.244.0.18:38113 - 31619 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 344 0.000662004s
	[INFO] 10.244.0.18:44467 - 58773 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000762502s
	[INFO] 10.244.0.20:39211 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000119172s
	[INFO] 10.244.0.20:58654 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000080046s
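	The NXDOMAIN/NOERROR pairs above are the usual resolv.conf search-path walk. A sketch of the search list implied by the 10.244.0.18 storage.googleapis.com queries (inferred from the query suffixes in this log only; the pod's actual resolv.conf is not shown, and ndots:5 is the typical Kubernetes default rather than a value taken from this run):
	
	    search gcp-auth.svc.cluster.local svc.cluster.local cluster.local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
	    options ndots:5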
	
	* 
	* ==> describe nodes <==
	* Name:               addons-922218
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-922218
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e286a113bb5db20a65222adef757d15268cdbb1a
	                    minikube.k8s.io/name=addons-922218
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_09T18_40_04_0700
	                    minikube.k8s.io/version=v1.31.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-922218
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Aug 2023 18:40:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-922218
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Aug 2023 18:44:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Aug 2023 18:43:08 +0000   Wed, 09 Aug 2023 18:39:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Aug 2023 18:43:08 +0000   Wed, 09 Aug 2023 18:39:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Aug 2023 18:43:08 +0000   Wed, 09 Aug 2023 18:39:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Aug 2023 18:43:08 +0000   Wed, 09 Aug 2023 18:40:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-922218
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	System Info:
	  Machine ID:                 270d6b48e580477daf661a282b2b6236
	  System UUID:                97d50004-e5c8-46b0-a901-915078f7a0d0
	  Boot ID:                    ea1f61fe-b434-46c1-afe7-153d4b2d65ef
	  Kernel Version:             5.15.0-1038-gcp
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.27.4
	  Kube-Proxy Version:         v1.27.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-65bdb79f98-xkdj9         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  gcp-auth                    gcp-auth-58478865f7-tvlv9                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  headlamp                    headlamp-66f6498c69-t8p2k                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  kube-system                 coredns-5d78c9869d-f9mtp                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m57s
	  kube-system                 etcd-addons-922218                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m9s
	  kube-system                 kindnet-rl2vf                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m57s
	  kube-system                 kube-apiserver-addons-922218             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-controller-manager-addons-922218    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-proxy-sn4cp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 kube-scheduler-addons-922218             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m52s  kube-proxy       
	  Normal  Starting                 4m10s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m9s   kubelet          Node addons-922218 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s   kubelet          Node addons-922218 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s   kubelet          Node addons-922218 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m57s  node-controller  Node addons-922218 event: Registered Node addons-922218 in Controller
	  Normal  NodeReady                3m23s  kubelet          Node addons-922218 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 be cb 5a fb c3 08 06
	[  +1.180593] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000017] ll header: 00000000: ff ff ff ff ff ff 82 31 b9 dc 2a 0a 08 06
	[  +2.912879] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa 0a 69 ec 93 ef 08 06
	[  +0.302481] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000018] ll header: 00000000: ff ff ff ff ff ff 1e 62 1a 87 d2 70 08 06
	[ +13.534154] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 02 c2 9b 3c b5 90 08 06
	[  +0.000368] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff aa 0a 69 ec 93 ef 08 06
	[Aug 9 18:41] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 2e ba 3b 6d 54 fc 76 41 97 cb a2 99 08 00
	[  +1.023518] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 2e ba 3b 6d 54 fc 76 41 97 cb a2 99 08 00
	[  +2.015808] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 2e ba 3b 6d 54 fc 76 41 97 cb a2 99 08 00
	[  +4.159586] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 2e ba 3b 6d 54 fc 76 41 97 cb a2 99 08 00
	[Aug 9 18:42] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 2e ba 3b 6d 54 fc 76 41 97 cb a2 99 08 00
	[ +16.126389] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 2e ba 3b 6d 54 fc 76 41 97 cb a2 99 08 00
	[ +32.508816] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 2e ba 3b 6d 54 fc 76 41 97 cb a2 99 08 00
	
	* 
	* ==> etcd [56e250a7a7aef270d7e2ec8c80340b87976c92a3f07e6c59ddea31645400c020] <==
	* {"level":"info","ts":"2023-08-09T18:39:59.285Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-08-09T18:39:59.286Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-08-09T18:39:59.286Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-09T18:39:59.287Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-09T18:39:59.287Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-09T18:39:59.287Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"warn","ts":"2023-08-09T18:40:19.662Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"197.154874ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-sn4cp\" ","response":"range_response_count:1 size:4422"}
	{"level":"info","ts":"2023-08-09T18:40:19.662Z","caller":"traceutil/trace.go:171","msg":"trace[1391753602] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-sn4cp; range_end:; response_count:1; response_revision:380; }","duration":"197.2963ms","start":"2023-08-09T18:40:19.465Z","end":"2023-08-09T18:40:19.662Z","steps":["trace[1391753602] 'range keys from in-memory index tree'  (duration: 194.757544ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-09T18:40:19.662Z","caller":"traceutil/trace.go:171","msg":"trace[332408685] transaction","detail":"{read_only:false; response_revision:381; number_of_response:1; }","duration":"197.041074ms","start":"2023-08-09T18:40:19.465Z","end":"2023-08-09T18:40:19.662Z","steps":["trace[332408685] 'process raft request'  (duration: 196.930267ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-09T18:40:19.766Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.61051ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-08-09T18:40:19.771Z","caller":"traceutil/trace.go:171","msg":"trace[1743847740] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:381; }","duration":"106.348185ms","start":"2023-08-09T18:40:19.665Z","end":"2023-08-09T18:40:19.771Z","steps":["trace[1743847740] 'range keys from in-memory index tree'  (duration: 100.523972ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-09T18:40:19.772Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.437525ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-922218\" ","response":"range_response_count:1 size:5654"}
	{"level":"info","ts":"2023-08-09T18:40:19.772Z","caller":"traceutil/trace.go:171","msg":"trace[434565889] range","detail":"{range_begin:/registry/minions/addons-922218; range_end:; response_count:1; response_revision:381; }","duration":"101.500197ms","start":"2023-08-09T18:40:19.670Z","end":"2023-08-09T18:40:19.772Z","steps":["trace[434565889] 'range keys from in-memory index tree'  (duration: 101.260839ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-09T18:40:20.574Z","caller":"traceutil/trace.go:171","msg":"trace[1765384681] transaction","detail":"{read_only:false; response_revision:393; number_of_response:1; }","duration":"101.80262ms","start":"2023-08-09T18:40:20.472Z","end":"2023-08-09T18:40:20.574Z","steps":["trace[1765384681] 'process raft request'  (duration: 101.711497ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-09T18:40:20.575Z","caller":"traceutil/trace.go:171","msg":"trace[1461641559] transaction","detail":"{read_only:false; response_revision:392; number_of_response:1; }","duration":"112.422954ms","start":"2023-08-09T18:40:20.462Z","end":"2023-08-09T18:40:20.575Z","steps":["trace[1461641559] 'process raft request'  (duration: 110.704791ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-09T18:40:20.580Z","caller":"traceutil/trace.go:171","msg":"trace[981470008] transaction","detail":"{read_only:false; response_revision:394; number_of_response:1; }","duration":"106.806564ms","start":"2023-08-09T18:40:20.473Z","end":"2023-08-09T18:40:20.580Z","steps":["trace[981470008] 'process raft request'  (duration: 106.533961ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-09T18:40:20.580Z","caller":"traceutil/trace.go:171","msg":"trace[58850078] transaction","detail":"{read_only:false; response_revision:395; number_of_response:1; }","duration":"116.999148ms","start":"2023-08-09T18:40:20.463Z","end":"2023-08-09T18:40:20.580Z","steps":["trace[58850078] 'process raft request'  (duration: 116.750834ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-09T18:40:21.355Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"178.823007ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/registry-proxy\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-08-09T18:40:21.356Z","caller":"traceutil/trace.go:171","msg":"trace[216923116] range","detail":"{range_begin:/registry/daemonsets/kube-system/registry-proxy; range_end:; response_count:0; response_revision:435; }","duration":"179.676785ms","start":"2023-08-09T18:40:21.176Z","end":"2023-08-09T18:40:21.355Z","steps":["trace[216923116] 'agreement among raft nodes before linearized reading'  (duration: 86.455512ms)","trace[216923116] 'range keys from in-memory index tree'  (duration: 92.327726ms)"],"step_count":2}
	{"level":"info","ts":"2023-08-09T18:40:21.356Z","caller":"traceutil/trace.go:171","msg":"trace[1702225764] transaction","detail":"{read_only:false; response_revision:436; number_of_response:1; }","duration":"178.484794ms","start":"2023-08-09T18:40:21.177Z","end":"2023-08-09T18:40:21.355Z","steps":["trace[1702225764] 'process raft request'  (duration: 98.083504ms)","trace[1702225764] 'compare'  (duration: 79.419231ms)"],"step_count":2}
	{"level":"warn","ts":"2023-08-09T18:40:21.362Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.407189ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2023-08-09T18:40:21.362Z","caller":"traceutil/trace.go:171","msg":"trace[1833254861] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:443; }","duration":"107.691407ms","start":"2023-08-09T18:40:21.255Z","end":"2023-08-09T18:40:21.362Z","steps":["trace[1833254861] 'agreement among raft nodes before linearized reading'  (duration: 107.317644ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-09T18:41:04.761Z","caller":"traceutil/trace.go:171","msg":"trace[1754292358] transaction","detail":"{read_only:false; response_revision:891; number_of_response:1; }","duration":"104.027407ms","start":"2023-08-09T18:41:04.657Z","end":"2023-08-09T18:41:04.761Z","steps":["trace[1754292358] 'process raft request'  (duration: 97.437785ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-09T18:41:05.375Z","caller":"traceutil/trace.go:171","msg":"trace[78993493] transaction","detail":"{read_only:false; response_revision:895; number_of_response:1; }","duration":"108.28619ms","start":"2023-08-09T18:41:05.266Z","end":"2023-08-09T18:41:05.375Z","steps":["trace[78993493] 'process raft request'  (duration: 108.069796ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-09T18:41:05.873Z","caller":"traceutil/trace.go:171","msg":"trace[1783102156] transaction","detail":"{read_only:false; response_revision:896; number_of_response:1; }","duration":"107.36997ms","start":"2023-08-09T18:41:05.766Z","end":"2023-08-09T18:41:05.873Z","steps":["trace[1783102156] 'process raft request'  (duration: 107.217134ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [a53c17b56d7219db2694ea3d02e8bd059bc03ee014721ca8f073b9c00cd4ba2a] <==
	* 2023/08/09 18:41:24 GCP Auth Webhook started!
	2023/08/09 18:41:29 Ready to marshal response ...
	2023/08/09 18:41:29 Ready to write response ...
	2023/08/09 18:41:29 Ready to marshal response ...
	2023/08/09 18:41:29 Ready to write response ...
	2023/08/09 18:41:29 Ready to marshal response ...
	2023/08/09 18:41:29 Ready to write response ...
	2023/08/09 18:41:38 Ready to marshal response ...
	2023/08/09 18:41:38 Ready to write response ...
	2023/08/09 18:41:43 Ready to marshal response ...
	2023/08/09 18:41:43 Ready to write response ...
	2023/08/09 18:41:44 Ready to marshal response ...
	2023/08/09 18:41:44 Ready to write response ...
	2023/08/09 18:42:32 Ready to marshal response ...
	2023/08/09 18:42:32 Ready to write response ...
	2023/08/09 18:43:01 Ready to marshal response ...
	2023/08/09 18:43:01 Ready to write response ...
	2023/08/09 18:44:03 Ready to marshal response ...
	2023/08/09 18:44:03 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  18:44:14 up  2:26,  0 users,  load average: 0.20, 2.14, 3.12
	Linux addons-922218 5.15.0-1038-gcp #46~20.04.1-Ubuntu SMP Fri Jul 14 09:48:19 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [e0a084609823161290d9c3ec12372291b000b67e6f47f6e45077edc3dbafdc99] <==
	* I0809 18:42:09.967454       1 main.go:227] handling current node
	I0809 18:42:19.973985       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0809 18:42:19.974008       1 main.go:227] handling current node
	I0809 18:42:29.979175       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0809 18:42:29.979199       1 main.go:227] handling current node
	I0809 18:42:39.982528       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0809 18:42:39.982556       1 main.go:227] handling current node
	I0809 18:42:49.986904       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0809 18:42:49.986927       1 main.go:227] handling current node
	I0809 18:42:59.999565       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0809 18:42:59.999594       1 main.go:227] handling current node
	I0809 18:43:10.003950       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0809 18:43:10.003982       1 main.go:227] handling current node
	I0809 18:43:20.016544       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0809 18:43:20.016569       1 main.go:227] handling current node
	I0809 18:43:30.029831       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0809 18:43:30.029853       1 main.go:227] handling current node
	I0809 18:43:40.041417       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0809 18:43:40.041442       1 main.go:227] handling current node
	I0809 18:43:50.045791       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0809 18:43:50.045824       1 main.go:227] handling current node
	I0809 18:44:00.049316       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0809 18:44:00.049338       1 main.go:227] handling current node
	I0809 18:44:10.053883       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0809 18:44:10.053910       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [b62463e53ea858e9345b81e5036b86ca73292af52f3e9b4600872cf2fc25589a] <==
	* I0809 18:42:42.959131       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0809 18:43:05.783190       1 handler_proxy.go:144] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0809 18:43:05.783224       1 handler_proxy.go:100] no RequestInfo found in the context
	E0809 18:43:05.783280       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0809 18:43:05.783289       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0809 18:43:17.500111       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0809 18:43:17.500168       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0809 18:43:17.510410       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0809 18:43:17.511290       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0809 18:43:17.511353       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0809 18:43:17.518859       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0809 18:43:17.518989       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0809 18:43:17.530818       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0809 18:43:17.530855       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0809 18:43:17.555072       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0809 18:43:17.555142       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0809 18:43:17.662940       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0809 18:43:17.663066       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0809 18:43:17.663219       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0809 18:43:17.663289       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0809 18:43:18.519329       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0809 18:43:18.663941       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0809 18:43:18.673091       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0809 18:44:03.571086       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs=map[IPv4:10.100.125.15]
	
	* 
	* ==> kube-controller-manager [ca8f3c2f8930b486e2aef9170daf8977a2c14f5e9651da9509f5ecf1846cfce1] <==
	* E0809 18:43:27.208054       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0809 18:43:27.617941       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0809 18:43:27.617977       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0809 18:43:37.492873       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0809 18:43:37.492915       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0809 18:43:37.719336       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0809 18:43:37.719377       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0809 18:43:39.651552       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0809 18:43:39.651589       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0809 18:43:46.638485       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0809 18:43:46.638520       1 shared_informer.go:318] Caches are synced for resource quota
	I0809 18:43:46.983445       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0809 18:43:46.983493       1 shared_informer.go:318] Caches are synced for garbage collector
	W0809 18:43:52.261710       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0809 18:43:52.261743       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0809 18:43:56.209493       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0809 18:43:56.209528       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0809 18:43:57.952445       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0809 18:43:57.952487       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0809 18:44:03.412054       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-65bdb79f98 to 1"
	I0809 18:44:03.422201       1 event.go:307] "Event occurred" object="default/hello-world-app-65bdb79f98" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-65bdb79f98-xkdj9"
	I0809 18:44:05.771316       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-create
	I0809 18:44:05.775467       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-patch
	W0809 18:44:08.355854       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0809 18:44:08.355889       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [4640a8941840a399f23f78a8b94df6e09a15057f3c544ad7365e3458cb7345b9] <==
	* I0809 18:40:20.766876       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0809 18:40:20.766988       1 server_others.go:110] "Detected node IP" address="192.168.49.2"
	I0809 18:40:20.767027       1 server_others.go:554] "Using iptables proxy"
	I0809 18:40:21.163293       1 server_others.go:192] "Using iptables Proxier"
	I0809 18:40:21.163409       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0809 18:40:21.163457       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0809 18:40:21.163493       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0809 18:40:21.163536       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0809 18:40:21.164316       1 server.go:658] "Version info" version="v1.27.4"
	I0809 18:40:21.164338       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0809 18:40:21.165418       1 config.go:188] "Starting service config controller"
	I0809 18:40:21.165444       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0809 18:40:21.165495       1 config.go:97] "Starting endpoint slice config controller"
	I0809 18:40:21.165506       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0809 18:40:21.166162       1 config.go:315] "Starting node config controller"
	I0809 18:40:21.166298       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0809 18:40:21.278129       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0809 18:40:21.278654       1 shared_informer.go:318] Caches are synced for node config
	I0809 18:40:21.371724       1 shared_informer.go:318] Caches are synced for service config
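	The proxier line above notes that route_localnet=1 is set so node-ports answer on localhost. One way to confirm the setting from inside the node (standard minikube and sysctl invocations; only the profile name is taken from this run):
	
	    minikube -p addons-922218 ssh -- sysctl net.ipv4.conf.all.route_localnet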
	
	* 
	* ==> kube-scheduler [0625007ab1edaa5862f9944a6e56f24a00c8fdc3060ee9e1919cf2d52584fc33] <==
	* W0809 18:40:01.072072       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0809 18:40:01.075811       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0809 18:40:01.072119       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0809 18:40:01.075894       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0809 18:40:01.071021       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0809 18:40:01.075966       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0809 18:40:01.072166       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0809 18:40:01.076023       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0809 18:40:01.072211       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0809 18:40:01.076088       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0809 18:40:01.072259       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0809 18:40:01.076149       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0809 18:40:01.072332       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0809 18:40:01.076208       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0809 18:40:01.072402       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0809 18:40:01.076252       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0809 18:40:01.969876       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0809 18:40:01.969909       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0809 18:40:01.969971       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0809 18:40:01.969990       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0809 18:40:02.036822       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0809 18:40:02.036867       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0809 18:40:02.271149       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0809 18:40:02.271184       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0809 18:40:04.565772       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Aug 09 18:44:04 addons-922218 kubelet[1568]: E0809 18:44:04.164848    1568 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/9445894e8b3d7bb07704e54805505ba144996875842aa253a8c1b472ee19aab0/diff" to get inode usage: stat /var/lib/containers/storage/overlay/9445894e8b3d7bb07704e54805505ba144996875842aa253a8c1b472ee19aab0/diff: no such file or directory, extraDiskErr: <nil>
	Aug 09 18:44:04 addons-922218 kubelet[1568]: E0809 18:44:04.167042    1568 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/376ec4be375f25da4e93cb19ca976c14e298d1e1e86eea6ddb71771ca08bbd6d/diff" to get inode usage: stat /var/lib/containers/storage/overlay/376ec4be375f25da4e93cb19ca976c14e298d1e1e86eea6ddb71771ca08bbd6d/diff: no such file or directory, extraDiskErr: <nil>
	Aug 09 18:44:04 addons-922218 kubelet[1568]: I0809 18:44:04.667385    1568 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvw5f\" (UniqueName: \"kubernetes.io/projected/eb4544bb-4783-441c-a80f-38a756ea5b6e-kube-api-access-jvw5f\") pod \"eb4544bb-4783-441c-a80f-38a756ea5b6e\" (UID: \"eb4544bb-4783-441c-a80f-38a756ea5b6e\") "
	Aug 09 18:44:04 addons-922218 kubelet[1568]: I0809 18:44:04.669511    1568 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb4544bb-4783-441c-a80f-38a756ea5b6e-kube-api-access-jvw5f" (OuterVolumeSpecName: "kube-api-access-jvw5f") pod "eb4544bb-4783-441c-a80f-38a756ea5b6e" (UID: "eb4544bb-4783-441c-a80f-38a756ea5b6e"). InnerVolumeSpecName "kube-api-access-jvw5f". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 09 18:44:04 addons-922218 kubelet[1568]: I0809 18:44:04.768229    1568 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-jvw5f\" (UniqueName: \"kubernetes.io/projected/eb4544bb-4783-441c-a80f-38a756ea5b6e-kube-api-access-jvw5f\") on node \"addons-922218\" DevicePath \"\""
	Aug 09 18:44:05 addons-922218 kubelet[1568]: I0809 18:44:05.211382    1568 scope.go:115] "RemoveContainer" containerID="c96493a504f28f2048a936375b0366fd22a49ac19793bb7c8588cd7c4f3e7a4d"
	Aug 09 18:44:05 addons-922218 kubelet[1568]: I0809 18:44:05.221561    1568 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-65bdb79f98-xkdj9" podStartSLOduration=1.212536396 podCreationTimestamp="2023-08-09 18:44:03 +0000 UTC" firstStartedPulling="2023-08-09 18:44:03.859457852 +0000 UTC m=+239.957490719" lastFinishedPulling="2023-08-09 18:44:04.868432702 +0000 UTC m=+240.966465560" observedRunningTime="2023-08-09 18:44:05.221293093 +0000 UTC m=+241.319325969" watchObservedRunningTime="2023-08-09 18:44:05.221511237 +0000 UTC m=+241.319544124"
	Aug 09 18:44:05 addons-922218 kubelet[1568]: I0809 18:44:05.229445    1568 scope.go:115] "RemoveContainer" containerID="c96493a504f28f2048a936375b0366fd22a49ac19793bb7c8588cd7c4f3e7a4d"
	Aug 09 18:44:05 addons-922218 kubelet[1568]: E0809 18:44:05.229961    1568 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c96493a504f28f2048a936375b0366fd22a49ac19793bb7c8588cd7c4f3e7a4d\": container with ID starting with c96493a504f28f2048a936375b0366fd22a49ac19793bb7c8588cd7c4f3e7a4d not found: ID does not exist" containerID="c96493a504f28f2048a936375b0366fd22a49ac19793bb7c8588cd7c4f3e7a4d"
	Aug 09 18:44:05 addons-922218 kubelet[1568]: I0809 18:44:05.230015    1568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:c96493a504f28f2048a936375b0366fd22a49ac19793bb7c8588cd7c4f3e7a4d} err="failed to get container status \"c96493a504f28f2048a936375b0366fd22a49ac19793bb7c8588cd7c4f3e7a4d\": rpc error: code = NotFound desc = could not find container \"c96493a504f28f2048a936375b0366fd22a49ac19793bb7c8588cd7c4f3e7a4d\": container with ID starting with c96493a504f28f2048a936375b0366fd22a49ac19793bb7c8588cd7c4f3e7a4d not found: ID does not exist"
	Aug 09 18:44:05 addons-922218 kubelet[1568]: E0809 18:44:05.784372    1568 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7799c6795f-6wk7p.1779cb4825824652", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7799c6795f-6wk7p", UID:"53d6e10b-1852-478b-a327-b54f8d65ec3e", APIVersion:"v1", ResourceVersion:"736", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"addons-922218"}, FirstTimestamp:time.Date(2023, time.August, 9, 18, 44, 5, 781710418, time.Local), LastTimestamp:time.Date(2023, time.August, 9, 18, 44, 5, 781710418, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7799c6795f-6wk7p.1779cb4825824652" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 09 18:44:06 addons-922218 kubelet[1568]: I0809 18:44:06.063382    1568 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=e2a28c2b-779e-4158-b319-e14a2d529ad0 path="/var/lib/kubelet/pods/e2a28c2b-779e-4158-b319-e14a2d529ad0/volumes"
	Aug 09 18:44:06 addons-922218 kubelet[1568]: I0809 18:44:06.063819    1568 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=eb4544bb-4783-441c-a80f-38a756ea5b6e path="/var/lib/kubelet/pods/eb4544bb-4783-441c-a80f-38a756ea5b6e/volumes"
	Aug 09 18:44:06 addons-922218 kubelet[1568]: I0809 18:44:06.064162    1568 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=f0992765-386b-4324-9746-0f97cb86b263 path="/var/lib/kubelet/pods/f0992765-386b-4324-9746-0f97cb86b263/volumes"
	Aug 09 18:44:07 addons-922218 kubelet[1568]: I0809 18:44:07.181937    1568 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/53d6e10b-1852-478b-a327-b54f8d65ec3e-webhook-cert\") pod \"53d6e10b-1852-478b-a327-b54f8d65ec3e\" (UID: \"53d6e10b-1852-478b-a327-b54f8d65ec3e\") "
	Aug 09 18:44:07 addons-922218 kubelet[1568]: I0809 18:44:07.182004    1568 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fp5tj\" (UniqueName: \"kubernetes.io/projected/53d6e10b-1852-478b-a327-b54f8d65ec3e-kube-api-access-fp5tj\") pod \"53d6e10b-1852-478b-a327-b54f8d65ec3e\" (UID: \"53d6e10b-1852-478b-a327-b54f8d65ec3e\") "
	Aug 09 18:44:07 addons-922218 kubelet[1568]: I0809 18:44:07.184064    1568 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53d6e10b-1852-478b-a327-b54f8d65ec3e-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "53d6e10b-1852-478b-a327-b54f8d65ec3e" (UID: "53d6e10b-1852-478b-a327-b54f8d65ec3e"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 09 18:44:07 addons-922218 kubelet[1568]: I0809 18:44:07.184198    1568 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53d6e10b-1852-478b-a327-b54f8d65ec3e-kube-api-access-fp5tj" (OuterVolumeSpecName: "kube-api-access-fp5tj") pod "53d6e10b-1852-478b-a327-b54f8d65ec3e" (UID: "53d6e10b-1852-478b-a327-b54f8d65ec3e"). InnerVolumeSpecName "kube-api-access-fp5tj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 09 18:44:07 addons-922218 kubelet[1568]: I0809 18:44:07.220136    1568 scope.go:115] "RemoveContainer" containerID="04e8e59ca3d8db58371a2c8be4eb0b171e27d1f81e66d54166cc3f87e21606eb"
	Aug 09 18:44:07 addons-922218 kubelet[1568]: I0809 18:44:07.235853    1568 scope.go:115] "RemoveContainer" containerID="04e8e59ca3d8db58371a2c8be4eb0b171e27d1f81e66d54166cc3f87e21606eb"
	Aug 09 18:44:07 addons-922218 kubelet[1568]: E0809 18:44:07.236245    1568 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04e8e59ca3d8db58371a2c8be4eb0b171e27d1f81e66d54166cc3f87e21606eb\": container with ID starting with 04e8e59ca3d8db58371a2c8be4eb0b171e27d1f81e66d54166cc3f87e21606eb not found: ID does not exist" containerID="04e8e59ca3d8db58371a2c8be4eb0b171e27d1f81e66d54166cc3f87e21606eb"
	Aug 09 18:44:07 addons-922218 kubelet[1568]: I0809 18:44:07.236287    1568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:04e8e59ca3d8db58371a2c8be4eb0b171e27d1f81e66d54166cc3f87e21606eb} err="failed to get container status \"04e8e59ca3d8db58371a2c8be4eb0b171e27d1f81e66d54166cc3f87e21606eb\": rpc error: code = NotFound desc = could not find container \"04e8e59ca3d8db58371a2c8be4eb0b171e27d1f81e66d54166cc3f87e21606eb\": container with ID starting with 04e8e59ca3d8db58371a2c8be4eb0b171e27d1f81e66d54166cc3f87e21606eb not found: ID does not exist"
	Aug 09 18:44:07 addons-922218 kubelet[1568]: I0809 18:44:07.282559    1568 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-fp5tj\" (UniqueName: \"kubernetes.io/projected/53d6e10b-1852-478b-a327-b54f8d65ec3e-kube-api-access-fp5tj\") on node \"addons-922218\" DevicePath \"\""
	Aug 09 18:44:07 addons-922218 kubelet[1568]: I0809 18:44:07.282598    1568 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/53d6e10b-1852-478b-a327-b54f8d65ec3e-webhook-cert\") on node \"addons-922218\" DevicePath \"\""
	Aug 09 18:44:08 addons-922218 kubelet[1568]: I0809 18:44:08.063255    1568 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=53d6e10b-1852-478b-a327-b54f8d65ec3e path="/var/lib/kubelet/pods/53d6e10b-1852-478b-a327-b54f8d65ec3e/volumes"
	
	* 
	* ==> storage-provisioner [cdeebb51ba503b1ca18c6be49fb64930ef99b43df006a87944739240f1c9e2ea] <==
	* I0809 18:40:51.073886       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0809 18:40:51.084256       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0809 18:40:51.084324       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0809 18:40:51.163662       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0809 18:40:51.163927       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-922218_2b04165b-e5b3-4341-aeb2-adbccae64c71!
	I0809 18:40:51.163873       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"df601683-b97a-4bf5-a4b9-84ecbf8c398d", APIVersion:"v1", ResourceVersion:"807", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-922218_2b04165b-e5b3-4341-aeb2-adbccae64c71 became leader
	I0809 18:40:51.264209       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-922218_2b04165b-e5b3-4341-aeb2-adbccae64c71!
	

-- /stdout --
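Aside: the leaderelection lines above take their lock on the kube-system/k8s.io-minikube-hostpath Endpoints object (visible in the emitted Event). To read back the current holder by hand, the conventional client-go resource-lock annotation can be queried; the annotation key here is an assumption, not something this log confirms:

    # Sketch: show which provisioner instance currently holds the lease
    # (annotation key is client-go's conventional resource-lock key, assumed).
    kubectl --context addons-922218 -n kube-system get endpoints k8s.io-minikube-hostpath \
      -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'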
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-922218 -n addons-922218
helpers_test.go:261: (dbg) Run:  kubectl --context addons-922218 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (152.17s)
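Note on the failure mode: the probe at addons_test.go:238 died with "ssh: Process exited with status 28", and 28 is curl's "operation timed out" exit code, so the NGINX controller never answered on 127.0.0.1:80 inside the node even though the nginx test pod itself went Running. A minimal manual re-check, sketched under the assumption that the addons-922218 profile from this run still exists (--max-time bounds the probe instead of letting it consume the whole two-minute SSH budget):

    # Re-run the same in-VM probe the test makes, but fail fast on a hang.
    out/minikube-linux-amd64 -p addons-922218 ssh \
      "curl -s --max-time 10 -H 'Host: nginx.example.com' http://127.0.0.1/"

    # If it still times out, check the controller's readiness and recent logs.
    kubectl --context addons-922218 -n ingress-nginx get pods -o wide
    kubectl --context addons-922218 -n ingress-nginx logs \
      --selector app.kubernetes.io/component=controller --tail=100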

TestIngressAddonLegacy/serial/ValidateIngressAddons (183.82s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-849795 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-849795 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (17.195839711s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-849795 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-849795 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [9d285b5a-d85e-4377-9785-c8537829f8df] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [9d285b5a-d85e-4377-9785-c8537829f8df] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.007539139s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-849795 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0809 18:51:28.242153  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/client.crt: no such file or directory
E0809 18:51:55.927108  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/client.crt: no such file or directory
E0809 18:52:22.134286  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/functional-421935/client.crt: no such file or directory
E0809 18:52:22.139597  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/functional-421935/client.crt: no such file or directory
E0809 18:52:22.149873  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/functional-421935/client.crt: no such file or directory
E0809 18:52:22.170130  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/functional-421935/client.crt: no such file or directory
E0809 18:52:22.210434  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/functional-421935/client.crt: no such file or directory
E0809 18:52:22.290787  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/functional-421935/client.crt: no such file or directory
E0809 18:52:22.451188  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/functional-421935/client.crt: no such file or directory
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-849795 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.122513109s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-849795 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
E0809 18:52:22.771691  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/functional-421935/client.crt: no such file or directory
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-849795 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
E0809 18:52:23.412523  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/functional-421935/client.crt: no such file or directory
E0809 18:52:24.692880  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/functional-421935/client.crt: no such file or directory
E0809 18:52:27.255021  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/functional-421935/client.crt: no such file or directory
E0809 18:52:32.375676  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/functional-421935/client.crt: no such file or directory
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.005365113s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
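The empty stderr plus ";; connection timed out" above means nothing answered DNS queries on 192.168.49.2:53, so the ingress-dns responder either never started or was not reachable on the node IP. A hedged debugging sketch (the grep pattern for the addon pod is an assumption; the dig options merely bound the wait):

    # Query the node IP directly with one short attempt instead of
    # nslookup's default retries.
    dig +time=5 +tries=1 @192.168.49.2 hello-john.test

    # Confirm the ingress-dns addon pod exists and is Running.
    kubectl --context ingress-addon-legacy-849795 -n kube-system get pods \
      | grep -i ingress-dns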
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-849795 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-849795 addons disable ingress-dns --alsologtostderr -v=1: (1.425457182s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-849795 addons disable ingress --alsologtostderr -v=1
E0809 18:52:42.616777  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/functional-421935/client.crt: no such file or directory
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-849795 addons disable ingress --alsologtostderr -v=1: (7.395915176s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-849795
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-849795:

-- stdout --
	[
	    {
	        "Id": "734105a6199ecf57c10c92b965fc3d5070955eba183d0bfc47998fa9a52f481f",
	        "Created": "2023-08-09T18:48:28.703791147Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 862740,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-08-09T18:48:28.978043317Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:51eee4927f7e218e70017d38db072c77f0b6036bbfe389eac8043694e7529d58",
	        "ResolvConfPath": "/var/lib/docker/containers/734105a6199ecf57c10c92b965fc3d5070955eba183d0bfc47998fa9a52f481f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/734105a6199ecf57c10c92b965fc3d5070955eba183d0bfc47998fa9a52f481f/hostname",
	        "HostsPath": "/var/lib/docker/containers/734105a6199ecf57c10c92b965fc3d5070955eba183d0bfc47998fa9a52f481f/hosts",
	        "LogPath": "/var/lib/docker/containers/734105a6199ecf57c10c92b965fc3d5070955eba183d0bfc47998fa9a52f481f/734105a6199ecf57c10c92b965fc3d5070955eba183d0bfc47998fa9a52f481f-json.log",
	        "Name": "/ingress-addon-legacy-849795",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-849795:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ingress-addon-legacy-849795",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b36e753b685958989aa6ec9049e3af147a00582e7e4720fa6715e5ba5671da01-init/diff:/var/lib/docker/overlay2/dffcbda35d4e6780372e77e03c9f976a612c164e3ac348da817dd7b6996e96fb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b36e753b685958989aa6ec9049e3af147a00582e7e4720fa6715e5ba5671da01/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b36e753b685958989aa6ec9049e3af147a00582e7e4720fa6715e5ba5671da01/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b36e753b685958989aa6ec9049e3af147a00582e7e4720fa6715e5ba5671da01/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-849795",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-849795/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-849795",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-849795",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-849795",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "068bd7b844067f0fe255e7b38e51ce08f2223c843e058d64c9c62cfb7496a33e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33422"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33421"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33418"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33420"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33419"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/068bd7b84406",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-849795": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "734105a6199e",
	                        "ingress-addon-legacy-849795"
	                    ],
	                    "NetworkID": "b301655c7756d581e079019cfecaccaf1875d58d1e03c5df873f79ae6f0f0076",
	                    "EndpointID": "a470cdd852d1491bca3440dcdea14fe41930fafb6624eb0b1146eaee04ef262b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
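On its face the inspect dump is healthy: the node container is Running, holds the expected static IP 192.168.49.2, and publishes SSH on 127.0.0.1:33422. When triaging by hand, Go templates can pull just those fields instead of scanning the full JSON; a sketch using the container name from this run:

    # Container state, node IP on the minikube network, and the host port
    # mapped to the container's SSH port.
    docker inspect -f '{{.State.Status}}' ingress-addon-legacy-849795
    docker inspect -f '{{(index .NetworkSettings.Networks "ingress-addon-legacy-849795").IPAddress}}' ingress-addon-legacy-849795
    docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ingress-addon-legacy-849795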
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-849795 -n ingress-addon-legacy-849795
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-849795 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-849795 logs -n 25: (1.065787866s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| ssh     | functional-421935 ssh stat                                               | functional-421935           | jenkins | v1.31.1 | 09 Aug 23 18:47 UTC | 09 Aug 23 18:47 UTC |
	|         | /mount-9p/created-by-pod                                                 |                             |         |         |                     |                     |
	| image   | functional-421935                                                        | functional-421935           | jenkins | v1.31.1 | 09 Aug 23 18:47 UTC | 09 Aug 23 18:47 UTC |
	|         | image ls --format json                                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                             |         |         |                     |                     |
	| ssh     | functional-421935 ssh sudo                                               | functional-421935           | jenkins | v1.31.1 | 09 Aug 23 18:47 UTC | 09 Aug 23 18:47 UTC |
	|         | umount -f /mount-9p                                                      |                             |         |         |                     |                     |
	| image   | functional-421935                                                        | functional-421935           | jenkins | v1.31.1 | 09 Aug 23 18:47 UTC | 09 Aug 23 18:47 UTC |
	|         | image ls --format table                                                  |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                             |         |         |                     |                     |
	| mount   | -p functional-421935                                                     | functional-421935           | jenkins | v1.31.1 | 09 Aug 23 18:47 UTC |                     |
	|         | /tmp/TestFunctionalparallelMountCmdspecific-port3216747073/001:/mount-9p |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --port 46464                                      |                             |         |         |                     |                     |
	| ssh     | functional-421935 ssh findmnt                                            | functional-421935           | jenkins | v1.31.1 | 09 Aug 23 18:47 UTC |                     |
	|         | -T /mount-9p | grep 9p                                                   |                             |         |         |                     |                     |
	| ssh     | functional-421935 ssh findmnt                                            | functional-421935           | jenkins | v1.31.1 | 09 Aug 23 18:47 UTC | 09 Aug 23 18:48 UTC |
	|         | -T /mount-9p | grep 9p                                                   |                             |         |         |                     |                     |
	| ssh     | functional-421935 ssh -- ls                                              | functional-421935           | jenkins | v1.31.1 | 09 Aug 23 18:48 UTC | 09 Aug 23 18:48 UTC |
	|         | -la /mount-9p                                                            |                             |         |         |                     |                     |
	| ssh     | functional-421935 ssh sudo                                               | functional-421935           | jenkins | v1.31.1 | 09 Aug 23 18:48 UTC |                     |
	|         | umount -f /mount-9p                                                      |                             |         |         |                     |                     |
	| mount   | -p functional-421935                                                     | functional-421935           | jenkins | v1.31.1 | 09 Aug 23 18:48 UTC |                     |
	|         | /tmp/TestFunctionalparallelMountCmdVerifyCleanup3365728086/001:/mount1   |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                   |                             |         |         |                     |                     |
	| ssh     | functional-421935 ssh findmnt                                            | functional-421935           | jenkins | v1.31.1 | 09 Aug 23 18:48 UTC |                     |
	|         | -T /mount1                                                               |                             |         |         |                     |                     |
	| mount   | -p functional-421935                                                     | functional-421935           | jenkins | v1.31.1 | 09 Aug 23 18:48 UTC |                     |
	|         | /tmp/TestFunctionalparallelMountCmdVerifyCleanup3365728086/001:/mount3   |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                   |                             |         |         |                     |                     |
	| mount   | -p functional-421935                                                     | functional-421935           | jenkins | v1.31.1 | 09 Aug 23 18:48 UTC |                     |
	|         | /tmp/TestFunctionalparallelMountCmdVerifyCleanup3365728086/001:/mount2   |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                   |                             |         |         |                     |                     |
	| ssh     | functional-421935 ssh findmnt                                            | functional-421935           | jenkins | v1.31.1 | 09 Aug 23 18:48 UTC | 09 Aug 23 18:48 UTC |
	|         | -T /mount1                                                               |                             |         |         |                     |                     |
	| ssh     | functional-421935 ssh findmnt                                            | functional-421935           | jenkins | v1.31.1 | 09 Aug 23 18:48 UTC | 09 Aug 23 18:48 UTC |
	|         | -T /mount2                                                               |                             |         |         |                     |                     |
	| ssh     | functional-421935 ssh findmnt                                            | functional-421935           | jenkins | v1.31.1 | 09 Aug 23 18:48 UTC | 09 Aug 23 18:48 UTC |
	|         | -T /mount3                                                               |                             |         |         |                     |                     |
	| mount   | -p functional-421935                                                     | functional-421935           | jenkins | v1.31.1 | 09 Aug 23 18:48 UTC |                     |
	|         | --kill=true                                                              |                             |         |         |                     |                     |
	| delete  | -p functional-421935                                                     | functional-421935           | jenkins | v1.31.1 | 09 Aug 23 18:48 UTC | 09 Aug 23 18:48 UTC |
	| start   | -p ingress-addon-legacy-849795                                           | ingress-addon-legacy-849795 | jenkins | v1.31.1 | 09 Aug 23 18:48 UTC | 09 Aug 23 18:49 UTC |
	|         | --kubernetes-version=v1.18.20                                            |                             |         |         |                     |                     |
	|         | --memory=4096 --wait=true                                                |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                             |         |         |                     |                     |
	|         | -v=5 --driver=docker                                                     |                             |         |         |                     |                     |
	|         | --container-runtime=crio                                                 |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-849795                                              | ingress-addon-legacy-849795 | jenkins | v1.31.1 | 09 Aug 23 18:49 UTC | 09 Aug 23 18:49 UTC |
	|         | addons enable ingress                                                    |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                   |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-849795                                              | ingress-addon-legacy-849795 | jenkins | v1.31.1 | 09 Aug 23 18:49 UTC | 09 Aug 23 18:49 UTC |
	|         | addons enable ingress-dns                                                |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                   |                             |         |         |                     |                     |
	| ssh     | ingress-addon-legacy-849795                                              | ingress-addon-legacy-849795 | jenkins | v1.31.1 | 09 Aug 23 18:50 UTC |                     |
	|         | ssh curl -s http://127.0.0.1/                                            |                             |         |         |                     |                     |
	|         | -H 'Host: nginx.example.com'                                             |                             |         |         |                     |                     |
	| ip      | ingress-addon-legacy-849795 ip                                           | ingress-addon-legacy-849795 | jenkins | v1.31.1 | 09 Aug 23 18:52 UTC | 09 Aug 23 18:52 UTC |
	| addons  | ingress-addon-legacy-849795                                              | ingress-addon-legacy-849795 | jenkins | v1.31.1 | 09 Aug 23 18:52 UTC | 09 Aug 23 18:52 UTC |
	|         | addons disable ingress-dns                                               |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                   |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-849795                                              | ingress-addon-legacy-849795 | jenkins | v1.31.1 | 09 Aug 23 18:52 UTC | 09 Aug 23 18:52 UTC |
	|         | addons disable ingress                                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                   |                             |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/09 18:48:13
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.20.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0809 18:48:13.291732  862090 out.go:296] Setting OutFile to fd 1 ...
	I0809 18:48:13.291930  862090 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 18:48:13.291940  862090 out.go:309] Setting ErrFile to fd 2...
	I0809 18:48:13.291945  862090 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 18:48:13.292153  862090 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17011-816603/.minikube/bin
	I0809 18:48:13.292788  862090 out.go:303] Setting JSON to false
	I0809 18:48:13.293861  862090 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":9048,"bootTime":1691597845,"procs":418,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1038-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0809 18:48:13.293925  862090 start.go:138] virtualization: kvm guest
	I0809 18:48:13.296122  862090 out.go:177] * [ingress-addon-legacy-849795] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)
	I0809 18:48:13.297725  862090 out.go:177]   - MINIKUBE_LOCATION=17011
	I0809 18:48:13.299145  862090 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 18:48:13.297761  862090 notify.go:220] Checking for updates...
	I0809 18:48:13.301814  862090 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17011-816603/kubeconfig
	I0809 18:48:13.303247  862090 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17011-816603/.minikube
	I0809 18:48:13.304436  862090 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0809 18:48:13.305874  862090 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0809 18:48:13.307376  862090 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 18:48:13.329216  862090 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0809 18:48:13.329352  862090 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0809 18:48:13.380655  862090 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:37 SystemTime:2023-08-09 18:48:13.372006289 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1038-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0809 18:48:13.380766  862090 docker.go:294] overlay module found
	I0809 18:48:13.383287  862090 out.go:177] * Using the docker driver based on user configuration
	I0809 18:48:13.384649  862090 start.go:298] selected driver: docker
	I0809 18:48:13.384661  862090 start.go:901] validating driver "docker" against <nil>
	I0809 18:48:13.384674  862090 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0809 18:48:13.385515  862090 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0809 18:48:13.436574  862090 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:37 SystemTime:2023-08-09 18:48:13.428202278 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1038-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0809 18:48:13.436765  862090 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0809 18:48:13.436990  862090 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0809 18:48:13.438790  862090 out.go:177] * Using Docker driver with root privileges
	I0809 18:48:13.440212  862090 cni.go:84] Creating CNI manager for ""
	I0809 18:48:13.440232  862090 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0809 18:48:13.440260  862090 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0809 18:48:13.440277  862090 start_flags.go:319] config:
	{Name:ingress-addon-legacy-849795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-849795 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 18:48:13.441879  862090 out.go:177] * Starting control plane node ingress-addon-legacy-849795 in cluster ingress-addon-legacy-849795
	I0809 18:48:13.443135  862090 cache.go:122] Beginning downloading kic base image for docker with crio
	I0809 18:48:13.444612  862090 out.go:177] * Pulling base image ...
	I0809 18:48:13.445871  862090 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0809 18:48:13.445960  862090 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 in local docker daemon
	I0809 18:48:13.462307  862090 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 in local docker daemon, skipping pull
	I0809 18:48:13.462340  862090 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 exists in daemon, skipping load
	I0809 18:48:13.468542  862090 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0809 18:48:13.468568  862090 cache.go:57] Caching tarball of preloaded images
	I0809 18:48:13.468720  862090 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0809 18:48:13.470543  862090 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0809 18:48:13.471802  862090 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0809 18:48:13.504981  862090 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17011-816603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0809 18:48:20.488595  862090 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0809 18:48:20.488696  862090 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17011-816603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0809 18:48:21.437589  862090 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I0809 18:48:21.437959  862090 profile.go:148] Saving config to /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/config.json ...
	I0809 18:48:21.437991  862090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/config.json: {Name:mk9f53a3735aa4fc3ebb4c0b3a7786f02e9fc6d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 18:48:21.438182  862090 cache.go:195] Successfully downloaded all kic artifacts
	I0809 18:48:21.438216  862090 start.go:365] acquiring machines lock for ingress-addon-legacy-849795: {Name:mk1914ff6e1b0638be7c07081e41af0dc4dd35fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 18:48:21.438258  862090 start.go:369] acquired machines lock for "ingress-addon-legacy-849795" in 31.542µs
	I0809 18:48:21.438277  862090 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-849795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-849795 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0809 18:48:21.438346  862090 start.go:125] createHost starting for "" (driver="docker")
	I0809 18:48:21.440694  862090 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0809 18:48:21.440959  862090 start.go:159] libmachine.API.Create for "ingress-addon-legacy-849795" (driver="docker")
	I0809 18:48:21.440991  862090 client.go:168] LocalClient.Create starting
	I0809 18:48:21.441058  862090 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca.pem
	I0809 18:48:21.441086  862090 main.go:141] libmachine: Decoding PEM data...
	I0809 18:48:21.441103  862090 main.go:141] libmachine: Parsing certificate...
	I0809 18:48:21.441160  862090 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17011-816603/.minikube/certs/cert.pem
	I0809 18:48:21.441182  862090 main.go:141] libmachine: Decoding PEM data...
	I0809 18:48:21.441194  862090 main.go:141] libmachine: Parsing certificate...
	I0809 18:48:21.441464  862090 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-849795 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0809 18:48:21.456982  862090 cli_runner.go:211] docker network inspect ingress-addon-legacy-849795 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0809 18:48:21.457055  862090 network_create.go:281] running [docker network inspect ingress-addon-legacy-849795] to gather additional debugging logs...
	I0809 18:48:21.457076  862090 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-849795
	W0809 18:48:21.472739  862090 cli_runner.go:211] docker network inspect ingress-addon-legacy-849795 returned with exit code 1
	I0809 18:48:21.472771  862090 network_create.go:284] error running [docker network inspect ingress-addon-legacy-849795]: docker network inspect ingress-addon-legacy-849795: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-849795 not found
	I0809 18:48:21.472786  862090 network_create.go:286] output of [docker network inspect ingress-addon-legacy-849795]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-849795 not found
	
	** /stderr **
	I0809 18:48:21.472857  862090 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0809 18:48:21.489968  862090 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000f94820}
	I0809 18:48:21.490024  862090 network_create.go:123] attempt to create docker network ingress-addon-legacy-849795 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0809 18:48:21.490095  862090 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-849795 ingress-addon-legacy-849795
	I0809 18:48:21.541601  862090 network_create.go:107] docker network ingress-addon-legacy-849795 192.168.49.0/24 created
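
Note: the network-setup step above reduces to two docker CLI calls: inspect existing networks until a free private /24 is found, then create a labeled bridge. A minimal Go sketch of the create call, shelling out to the docker CLI (profile name and subnet copied from this run; an illustration, not minikube's actual network_create.go):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		name := "ingress-addon-legacy-849795" // profile name from this run
		subnet := "192.168.49.0/24"           // first free private subnet probed above
		// Mirrors the `docker network create` invocation logged above.
		cmd := exec.Command("docker", "network", "create",
			"--driver=bridge",
			"--subnet="+subnet,
			"--gateway=192.168.49.1",
			"-o", "--ip-masq", "-o", "--icc",
			"-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true",
			"--label=name.minikube.sigs.k8s.io="+name,
			name)
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("network create failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("created network %s (%s)\n", name, subnet)
	}
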
	I0809 18:48:21.541637  862090 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-849795" container
	I0809 18:48:21.541713  862090 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0809 18:48:21.556602  862090 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-849795 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-849795 --label created_by.minikube.sigs.k8s.io=true
	I0809 18:48:21.572792  862090 oci.go:103] Successfully created a docker volume ingress-addon-legacy-849795
	I0809 18:48:21.572891  862090 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-849795-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-849795 --entrypoint /usr/bin/test -v ingress-addon-legacy-849795:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 -d /var/lib
	I0809 18:48:23.310429  862090 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-849795-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-849795 --entrypoint /usr/bin/test -v ingress-addon-legacy-849795:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 -d /var/lib: (1.737490092s)
	I0809 18:48:23.310460  862090 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-849795
	I0809 18:48:23.310507  862090 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0809 18:48:23.310536  862090 kic.go:190] Starting extracting preloaded images to volume ...
	I0809 18:48:23.310606  862090 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17011-816603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-849795:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 -I lz4 -xf /preloaded.tar -C /extractDir
	I0809 18:48:28.632878  862090 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17011-816603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-849795:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 -I lz4 -xf /preloaded.tar -C /extractDir: (5.322206709s)
	I0809 18:48:28.632917  862090 kic.go:199] duration metric: took 5.322375 seconds to extract preloaded images to volume
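
Note: the preload step in the preceding lines uses a small trick: the images tarball is bind-mounted read-only into a throwaway container whose entrypoint is tar, which unpacks it straight into the named volume that later backs /var in the node container. A hedged Go sketch of that one step (paths and image taken from this run, not minikube's kic.go):

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		tarball := "/home/jenkins/minikube-integration/17011-816603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4"
		volume := "ingress-addon-legacy-849795"
		kic := "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971"

		// tar runs as the container entrypoint and extracts the preload
		// into the volume; the --rm container exists only for this copy.
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro",
			"-v", volume+":/extractDir",
			kic, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("extract failed: %v\n%s", err, out)
		}
	}
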
	W0809 18:48:28.633052  862090 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0809 18:48:28.633139  862090 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0809 18:48:28.688828  862090 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-849795 --name ingress-addon-legacy-849795 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-849795 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-849795 --network ingress-addon-legacy-849795 --ip 192.168.49.2 --volume ingress-addon-legacy-849795:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37
	I0809 18:48:28.985498  862090 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-849795 --format={{.State.Running}}
	I0809 18:48:29.002471  862090 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-849795 --format={{.State.Status}}
	I0809 18:48:29.020357  862090 cli_runner.go:164] Run: docker exec ingress-addon-legacy-849795 stat /var/lib/dpkg/alternatives/iptables
	I0809 18:48:29.084362  862090 oci.go:144] the created container "ingress-addon-legacy-849795" has a running status.
	I0809 18:48:29.084392  862090 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17011-816603/.minikube/machines/ingress-addon-legacy-849795/id_rsa...
	I0809 18:48:29.243516  862090 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/machines/ingress-addon-legacy-849795/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0809 18:48:29.243593  862090 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17011-816603/.minikube/machines/ingress-addon-legacy-849795/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0809 18:48:29.262795  862090 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-849795 --format={{.State.Status}}
	I0809 18:48:29.284497  862090 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0809 18:48:29.284521  862090 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-849795 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0809 18:48:29.355521  862090 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-849795 --format={{.State.Status}}
	I0809 18:48:29.374337  862090 machine.go:88] provisioning docker machine ...
	I0809 18:48:29.374384  862090 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-849795"
	I0809 18:48:29.374467  862090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-849795
	I0809 18:48:29.397751  862090 main.go:141] libmachine: Using SSH client type: native
	I0809 18:48:29.398475  862090 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 127.0.0.1 33422 <nil> <nil>}
	I0809 18:48:29.398503  862090 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-849795 && echo "ingress-addon-legacy-849795" | sudo tee /etc/hostname
	I0809 18:48:29.399231  862090 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0809 18:48:32.550383  862090 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-849795
	
	I0809 18:48:32.550498  862090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-849795
	I0809 18:48:32.566683  862090 main.go:141] libmachine: Using SSH client type: native
	I0809 18:48:32.567085  862090 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 127.0.0.1 33422 <nil> <nil>}
	I0809 18:48:32.567108  862090 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-849795' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-849795/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-849795' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0809 18:48:32.704015  862090 main.go:141] libmachine: SSH cmd err, output: <nil>: 
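
Note: all provisioning above runs over SSH to 127.0.0.1:33422, the host port Docker published for the container's 22/tcp (visible in the dialer struct). A minimal sketch of such a client, assuming the standard golang.org/x/crypto/ssh API (key path and port from this run):

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/17011-816603/.minikube/machines/ingress-addon-legacy-849795/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local kic node, host key unchecked
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:33422", cfg) // published 22/tcp port
		if err != nil {
			panic(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		out, _ := sess.CombinedOutput("hostname")
		fmt.Printf("remote hostname: %s", out)
	}
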
	I0809 18:48:32.704047  862090 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17011-816603/.minikube CaCertPath:/home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17011-816603/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17011-816603/.minikube}
	I0809 18:48:32.704066  862090 ubuntu.go:177] setting up certificates
	I0809 18:48:32.704077  862090 provision.go:83] configureAuth start
	I0809 18:48:32.704144  862090 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-849795
	I0809 18:48:32.721040  862090 provision.go:138] copyHostCerts
	I0809 18:48:32.721096  862090 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17011-816603/.minikube/ca.pem
	I0809 18:48:32.721131  862090 exec_runner.go:144] found /home/jenkins/minikube-integration/17011-816603/.minikube/ca.pem, removing ...
	I0809 18:48:32.721140  862090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17011-816603/.minikube/ca.pem
	I0809 18:48:32.721213  862090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17011-816603/.minikube/ca.pem (1082 bytes)
	I0809 18:48:32.721302  862090 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17011-816603/.minikube/cert.pem
	I0809 18:48:32.721322  862090 exec_runner.go:144] found /home/jenkins/minikube-integration/17011-816603/.minikube/cert.pem, removing ...
	I0809 18:48:32.721326  862090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17011-816603/.minikube/cert.pem
	I0809 18:48:32.721350  862090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17011-816603/.minikube/cert.pem (1123 bytes)
	I0809 18:48:32.721404  862090 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17011-816603/.minikube/key.pem
	I0809 18:48:32.721421  862090 exec_runner.go:144] found /home/jenkins/minikube-integration/17011-816603/.minikube/key.pem, removing ...
	I0809 18:48:32.721432  862090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17011-816603/.minikube/key.pem
	I0809 18:48:32.721454  862090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17011-816603/.minikube/key.pem (1679 bytes)
	I0809 18:48:32.721513  862090 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17011-816603/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-849795 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-849795]
	I0809 18:48:32.846358  862090 provision.go:172] copyRemoteCerts
	I0809 18:48:32.846426  862090 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0809 18:48:32.846465  862090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-849795
	I0809 18:48:32.863316  862090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33422 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/ingress-addon-legacy-849795/id_rsa Username:docker}
	I0809 18:48:32.960338  862090 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0809 18:48:32.960398  862090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0809 18:48:32.982148  862090 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0809 18:48:32.982231  862090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0809 18:48:33.003263  862090 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0809 18:48:33.003334  862090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0809 18:48:33.024858  862090 provision.go:86] duration metric: configureAuth took 320.764733ms
	I0809 18:48:33.024892  862090 ubuntu.go:193] setting minikube options for container-runtime
	I0809 18:48:33.025054  862090 config.go:182] Loaded profile config "ingress-addon-legacy-849795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0809 18:48:33.025179  862090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-849795
	I0809 18:48:33.041395  862090 main.go:141] libmachine: Using SSH client type: native
	I0809 18:48:33.041796  862090 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 127.0.0.1 33422 <nil> <nil>}
	I0809 18:48:33.041812  862090 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0809 18:48:33.286713  862090 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0809 18:48:33.286739  862090 machine.go:91] provisioned docker machine in 3.912371268s
	I0809 18:48:33.286751  862090 client.go:171] LocalClient.Create took 11.845750812s
	I0809 18:48:33.286774  862090 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-849795" took 11.845814082s
	I0809 18:48:33.286785  862090 start.go:300] post-start starting for "ingress-addon-legacy-849795" (driver="docker")
	I0809 18:48:33.286798  862090 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0809 18:48:33.286911  862090 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0809 18:48:33.286964  862090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-849795
	I0809 18:48:33.303544  862090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33422 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/ingress-addon-legacy-849795/id_rsa Username:docker}
	I0809 18:48:33.400520  862090 ssh_runner.go:195] Run: cat /etc/os-release
	I0809 18:48:33.403545  862090 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0809 18:48:33.403576  862090 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0809 18:48:33.403584  862090 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0809 18:48:33.403592  862090 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0809 18:48:33.403603  862090 filesync.go:126] Scanning /home/jenkins/minikube-integration/17011-816603/.minikube/addons for local assets ...
	I0809 18:48:33.403676  862090 filesync.go:126] Scanning /home/jenkins/minikube-integration/17011-816603/.minikube/files for local assets ...
	I0809 18:48:33.403778  862090 filesync.go:149] local asset: /home/jenkins/minikube-integration/17011-816603/.minikube/files/etc/ssl/certs/8234342.pem -> 8234342.pem in /etc/ssl/certs
	I0809 18:48:33.403789  862090 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/files/etc/ssl/certs/8234342.pem -> /etc/ssl/certs/8234342.pem
	I0809 18:48:33.403872  862090 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0809 18:48:33.411760  862090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/files/etc/ssl/certs/8234342.pem --> /etc/ssl/certs/8234342.pem (1708 bytes)
	I0809 18:48:33.434353  862090 start.go:303] post-start completed in 147.551961ms
	I0809 18:48:33.434728  862090 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-849795
	I0809 18:48:33.451168  862090 profile.go:148] Saving config to /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/config.json ...
	I0809 18:48:33.451434  862090 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0809 18:48:33.451475  862090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-849795
	I0809 18:48:33.468531  862090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33422 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/ingress-addon-legacy-849795/id_rsa Username:docker}
	I0809 18:48:33.560721  862090 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0809 18:48:33.565250  862090 start.go:128] duration metric: createHost completed in 12.126888697s
	I0809 18:48:33.565283  862090 start.go:83] releasing machines lock for "ingress-addon-legacy-849795", held for 12.127014817s
	I0809 18:48:33.565360  862090 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-849795
	I0809 18:48:33.582215  862090 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0809 18:48:33.582259  862090 ssh_runner.go:195] Run: cat /version.json
	I0809 18:48:33.582308  862090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-849795
	I0809 18:48:33.582316  862090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-849795
	I0809 18:48:33.599822  862090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33422 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/ingress-addon-legacy-849795/id_rsa Username:docker}
	I0809 18:48:33.600532  862090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33422 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/ingress-addon-legacy-849795/id_rsa Username:docker}
	I0809 18:48:33.781902  862090 ssh_runner.go:195] Run: systemctl --version
	I0809 18:48:33.786400  862090 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0809 18:48:33.925802  862090 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0809 18:48:33.930394  862090 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0809 18:48:33.949633  862090 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0809 18:48:33.949713  862090 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0809 18:48:33.978137  862090 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0809 18:48:33.978161  862090 start.go:466] detecting cgroup driver to use...
	I0809 18:48:33.978193  862090 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0809 18:48:33.978245  862090 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0809 18:48:33.993417  862090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0809 18:48:34.004972  862090 docker.go:196] disabling cri-docker service (if available) ...
	I0809 18:48:34.005023  862090 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0809 18:48:34.018760  862090 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0809 18:48:34.032275  862090 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0809 18:48:34.107163  862090 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0809 18:48:34.183104  862090 docker.go:212] disabling docker service ...
	I0809 18:48:34.183184  862090 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0809 18:48:34.201706  862090 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0809 18:48:34.213266  862090 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0809 18:48:34.289345  862090 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0809 18:48:34.375060  862090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0809 18:48:34.386123  862090 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0809 18:48:34.401872  862090 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0809 18:48:34.401954  862090 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0809 18:48:34.411745  862090 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0809 18:48:34.411822  862090 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0809 18:48:34.421462  862090 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0809 18:48:34.430855  862090 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
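
Note: after the three sed edits above, the drop-in /etc/crio/crio.conf.d/02-crio.conf carries roughly the following keys (section placement assumed per stock CRI-O layout; only the edited settings are shown, the real file has more):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.2"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
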
	I0809 18:48:34.440502  862090 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0809 18:48:34.449817  862090 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0809 18:48:34.457768  862090 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0809 18:48:34.466113  862090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0809 18:48:34.541028  862090 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0809 18:48:34.655836  862090 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0809 18:48:34.655905  862090 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0809 18:48:34.659405  862090 start.go:534] Will wait 60s for crictl version
	I0809 18:48:34.659483  862090 ssh_runner.go:195] Run: which crictl
	I0809 18:48:34.662546  862090 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0809 18:48:34.698272  862090 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0809 18:48:34.698358  862090 ssh_runner.go:195] Run: crio --version
	I0809 18:48:34.732164  862090 ssh_runner.go:195] Run: crio --version
	I0809 18:48:34.767002  862090 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I0809 18:48:34.768419  862090 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-849795 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0809 18:48:34.784784  862090 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0809 18:48:34.788323  862090 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0809 18:48:34.798535  862090 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0809 18:48:34.798645  862090 ssh_runner.go:195] Run: sudo crictl images --output json
	I0809 18:48:34.841950  862090 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0809 18:48:34.842022  862090 ssh_runner.go:195] Run: which lz4
	I0809 18:48:34.845341  862090 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0809 18:48:34.845423  862090 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0809 18:48:34.848450  862090 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0809 18:48:34.848485  862090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I0809 18:48:35.779051  862090 crio.go:444] Took 0.933639 seconds to copy over tarball
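
Note: the lines above show a check-then-copy pattern: a remote stat probe, and the ~495 MB scp only when the probe exits non-zero. A hedged sketch with the system ssh/scp binaries (host, port, and key taken from this run):

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		key := "/home/jenkins/minikube-integration/17011-816603/.minikube/machines/ingress-addon-legacy-849795/id_rsa"
		target := "docker@127.0.0.1"
		local := "preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4"

		// Existence probe mirrors the logged `stat -c "%s %y"` call.
		check := exec.Command("ssh", "-i", key, "-p", "33422", target,
			`stat -c "%s %y" /preloaded.tar.lz4`)
		if check.Run() == nil {
			return // already present, skip the large copy
		}
		cp := exec.Command("scp", "-i", key, "-P", "33422",
			local, target+":/preloaded.tar.lz4")
		if out, err := cp.CombinedOutput(); err != nil {
			log.Fatalf("scp failed: %v\n%s", err, out)
		}
	}
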
	I0809 18:48:35.779131  862090 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0809 18:48:38.061438  862090 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.28228095s)
	I0809 18:48:38.061462  862090 crio.go:451] Took 2.282368 seconds to extract the tarball
	I0809 18:48:38.061472  862090 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0809 18:48:38.134821  862090 ssh_runner.go:195] Run: sudo crictl images --output json
	I0809 18:48:38.167052  862090 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0809 18:48:38.167077  862090 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0809 18:48:38.167158  862090 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0809 18:48:38.167166  862090 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0809 18:48:38.167185  862090 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0809 18:48:38.167205  862090 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0809 18:48:38.167272  862090 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0809 18:48:38.167321  862090 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0809 18:48:38.167199  862090 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0809 18:48:38.167337  862090 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0809 18:48:38.168671  862090 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0809 18:48:38.168681  862090 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0809 18:48:38.168698  862090 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0809 18:48:38.168696  862090 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0809 18:48:38.168708  862090 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0809 18:48:38.168704  862090 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0809 18:48:38.168717  862090 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0809 18:48:38.168751  862090 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0809 18:48:38.330907  862090 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0809 18:48:38.337776  862090 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0809 18:48:38.341305  862090 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0809 18:48:38.342055  862090 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0809 18:48:38.342703  862090 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0809 18:48:38.354246  862090 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0809 18:48:38.372311  862090 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0809 18:48:38.450789  862090 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0809 18:48:38.461986  862090 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0809 18:48:38.462093  862090 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0809 18:48:38.462173  862090 ssh_runner.go:195] Run: which crictl
	I0809 18:48:38.509374  862090 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0809 18:48:38.509434  862090 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0809 18:48:38.509433  862090 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0809 18:48:38.509452  862090 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0809 18:48:38.509471  862090 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0809 18:48:38.509484  862090 ssh_runner.go:195] Run: which crictl
	I0809 18:48:38.509499  862090 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0809 18:48:38.509529  862090 ssh_runner.go:195] Run: which crictl
	I0809 18:48:38.509484  862090 ssh_runner.go:195] Run: which crictl
	I0809 18:48:38.509533  862090 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0809 18:48:38.509377  862090 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0809 18:48:38.509579  862090 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0809 18:48:38.509596  862090 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0809 18:48:38.509609  862090 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0809 18:48:38.509602  862090 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0809 18:48:38.509634  862090 ssh_runner.go:195] Run: which crictl
	I0809 18:48:38.509650  862090 ssh_runner.go:195] Run: which crictl
	I0809 18:48:38.509659  862090 ssh_runner.go:195] Run: which crictl
	I0809 18:48:38.596680  862090 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0809 18:48:38.596780  862090 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0809 18:48:38.596785  862090 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0809 18:48:38.596887  862090 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0809 18:48:38.596919  862090 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0809 18:48:38.596932  862090 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0809 18:48:38.597018  862090 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0809 18:48:38.680566  862090 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0809 18:48:38.680617  862090 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0809 18:48:38.680659  862090 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0809 18:48:38.759421  862090 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0809 18:48:38.759486  862090 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0809 18:48:38.759518  862090 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0809 18:48:38.759617  862090 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0809 18:48:38.759678  862090 cache_images.go:92] LoadImages completed in 592.586264ms
	W0809 18:48:38.759769  862090 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7: no such file or directory
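
Note: the "needs transfer" decisions above come from asking the runtime for each image's ID (the podman inspect calls) and comparing it against the pinned hash; since the on-disk cache directory is also empty here, minikube only warns and leaves the pulls to kubeadm. A sketch of the runtime-side probe (podman invocation as logged; the comparison wiring is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// imagePresent reports whether the CRI-O store already holds img at
	// the expected ID, mirroring the `podman image inspect` probes above.
	func imagePresent(img, wantID string) bool {
		out, err := exec.Command("sudo", "podman", "image", "inspect",
			"--format", "{{.Id}}", img).Output()
		if err != nil {
			return false // not in the store at all
		}
		return strings.TrimSpace(string(out)) == wantID
	}

	func main() {
		// Hash taken from the pause:3.2 line in this log.
		ok := imagePresent("registry.k8s.io/pause:3.2",
			"80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c")
		fmt.Println("pause:3.2 preloaded:", ok)
	}
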
	I0809 18:48:38.759824  862090 ssh_runner.go:195] Run: crio config
	I0809 18:48:38.802490  862090 cni.go:84] Creating CNI manager for ""
	I0809 18:48:38.802509  862090 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0809 18:48:38.802521  862090 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0809 18:48:38.802538  862090 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-849795 NodeName:ingress-addon-legacy-849795 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0809 18:48:38.802732  862090 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-849795"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0809 18:48:38.802826  862090 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-849795 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-849795 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0809 18:48:38.802880  862090 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0809 18:48:38.811390  862090 binaries.go:44] Found k8s binaries, skipping transfer
	I0809 18:48:38.811450  862090 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0809 18:48:38.819651  862090 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0809 18:48:38.835450  862090 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0809 18:48:38.851675  862090 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0809 18:48:38.867380  862090 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0809 18:48:38.870565  862090 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0809 18:48:38.880069  862090 certs.go:56] Setting up /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795 for IP: 192.168.49.2
	I0809 18:48:38.880100  862090 certs.go:190] acquiring lock for shared ca certs: {Name:mk19b72d6df3cc07014c8108931f9946a7850469 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 18:48:38.880276  862090 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17011-816603/.minikube/ca.key
	I0809 18:48:38.880320  862090 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17011-816603/.minikube/proxy-client-ca.key
	I0809 18:48:38.880364  862090 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/client.key
	I0809 18:48:38.880377  862090 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/client.crt with IP's: []
	I0809 18:48:39.120843  862090 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/client.crt ...
	I0809 18:48:39.120877  862090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/client.crt: {Name:mk9363a359b9e9de9cc204620a901593180c7498 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 18:48:39.121127  862090 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/client.key ...
	I0809 18:48:39.121155  862090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/client.key: {Name:mkaa1e2cc6dae93adbc460233abd4b0ee3b50798 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 18:48:39.121280  862090 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/apiserver.key.dd3b5fb2
	I0809 18:48:39.121296  862090 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0809 18:48:39.373445  862090 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/apiserver.crt.dd3b5fb2 ...
	I0809 18:48:39.373478  862090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/apiserver.crt.dd3b5fb2: {Name:mk86533cd94d4d777371ee3cd1c79237bb97921b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 18:48:39.373672  862090 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/apiserver.key.dd3b5fb2 ...
	I0809 18:48:39.373688  862090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/apiserver.key.dd3b5fb2: {Name:mkdb82111440e622ebfc4263c112551b30677e65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 18:48:39.373780  862090 certs.go:337] copying /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/apiserver.crt
	I0809 18:48:39.373876  862090 certs.go:341] copying /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/apiserver.key
	I0809 18:48:39.373935  862090 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/proxy-client.key
	I0809 18:48:39.373950  862090 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/proxy-client.crt with IP's: []
	I0809 18:48:39.623215  862090 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/proxy-client.crt ...
	I0809 18:48:39.623253  862090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/proxy-client.crt: {Name:mkfd6d8d99938e545d6642d5ff73e724bd52da64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 18:48:39.623447  862090 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/proxy-client.key ...
	I0809 18:48:39.623462  862090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/proxy-client.key: {Name:mk6b97fd8cc6aa19d7e8cf3300289e040e2e3f47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
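
Note: each "generating ... signed cert" step above issues an x509 certificate against the minikube CA with the SANs listed (192.168.49.2, 10.96.0.1, 127.0.0.1, 10.0.0.1 for the apiserver cert). A condensed crypto/x509 sketch; it self-signs for brevity where the real flow signs with ca.key, and all names are illustrative:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			// SANs matching the apiserver cert generated above.
			IPAddresses: []net.IP{
				net.ParseIP("192.168.49.2"), net.ParseIP("10.96.0.1"),
				net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			},
			DNSNames:    []string{"localhost", "minikube"},
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		// Self-signed here; minikube passes the CA cert and ca.key instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
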
	I0809 18:48:39.623559  862090 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0809 18:48:39.623587  862090 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0809 18:48:39.623602  862090 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0809 18:48:39.623614  862090 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0809 18:48:39.623624  862090 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0809 18:48:39.623658  862090 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0809 18:48:39.623671  862090 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0809 18:48:39.623680  862090 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0809 18:48:39.623738  862090 certs.go:437] found cert: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/home/jenkins/minikube-integration/17011-816603/.minikube/certs/823434.pem (1338 bytes)
	W0809 18:48:39.623778  862090 certs.go:433] ignoring /home/jenkins/minikube-integration/17011-816603/.minikube/certs/home/jenkins/minikube-integration/17011-816603/.minikube/certs/823434_empty.pem, impossibly tiny 0 bytes
	I0809 18:48:39.623789  862090 certs.go:437] found cert: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca-key.pem (1675 bytes)
	I0809 18:48:39.623816  862090 certs.go:437] found cert: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca.pem (1082 bytes)
	I0809 18:48:39.623842  862090 certs.go:437] found cert: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/home/jenkins/minikube-integration/17011-816603/.minikube/certs/cert.pem (1123 bytes)
	I0809 18:48:39.623864  862090 certs.go:437] found cert: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/home/jenkins/minikube-integration/17011-816603/.minikube/certs/key.pem (1679 bytes)
	I0809 18:48:39.623903  862090 certs.go:437] found cert: /home/jenkins/minikube-integration/17011-816603/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17011-816603/.minikube/files/etc/ssl/certs/8234342.pem (1708 bytes)
	I0809 18:48:39.623930  862090 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0809 18:48:39.623942  862090 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/823434.pem -> /usr/share/ca-certificates/823434.pem
	I0809 18:48:39.623956  862090 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/files/etc/ssl/certs/8234342.pem -> /usr/share/ca-certificates/8234342.pem
	I0809 18:48:39.624647  862090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0809 18:48:39.646726  862090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0809 18:48:39.667517  862090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0809 18:48:39.688127  862090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0809 18:48:39.708851  862090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0809 18:48:39.729606  862090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0809 18:48:39.750476  862090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0809 18:48:39.771357  862090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0809 18:48:39.791781  862090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0809 18:48:39.812780  862090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/certs/823434.pem --> /usr/share/ca-certificates/823434.pem (1338 bytes)
	I0809 18:48:39.833173  862090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/files/etc/ssl/certs/8234342.pem --> /usr/share/ca-certificates/8234342.pem (1708 bytes)
	I0809 18:48:39.854162  862090 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0809 18:48:39.869430  862090 ssh_runner.go:195] Run: openssl version
	I0809 18:48:39.874410  862090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0809 18:48:39.882445  862090 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0809 18:48:39.885723  862090 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug  9 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0809 18:48:39.885769  862090 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0809 18:48:39.892105  862090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0809 18:48:39.900147  862090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/823434.pem && ln -fs /usr/share/ca-certificates/823434.pem /etc/ssl/certs/823434.pem"
	I0809 18:48:39.907935  862090 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/823434.pem
	I0809 18:48:39.910995  862090 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug  9 18:45 /usr/share/ca-certificates/823434.pem
	I0809 18:48:39.911044  862090 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/823434.pem
	I0809 18:48:39.917245  862090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/823434.pem /etc/ssl/certs/51391683.0"
	I0809 18:48:39.925468  862090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8234342.pem && ln -fs /usr/share/ca-certificates/8234342.pem /etc/ssl/certs/8234342.pem"
	I0809 18:48:39.933727  862090 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8234342.pem
	I0809 18:48:39.936717  862090 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug  9 18:45 /usr/share/ca-certificates/8234342.pem
	I0809 18:48:39.936762  862090 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8234342.pem
	I0809 18:48:39.942803  862090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8234342.pem /etc/ssl/certs/3ec20f2e.0"
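The symlink steps above follow OpenSSL's subject-hash lookup scheme: "openssl x509 -hash -noout" prints the hash (b5213941 for minikubeCA.pem here) that names the "<hash>.0" link under /etc/ssl/certs that TLS clients consult. A minimal sketch of the same steps, assuming a CA PEM at the hypothetical path ./minikubeCA.pem:

    # Compute the subject hash; its output (e.g. b5213941) names the symlink
    hash=$(openssl x509 -hash -noout -in ./minikubeCA.pem)
    sudo cp ./minikubeCA.pem /usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"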
	I0809 18:48:39.951624  862090 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0809 18:48:39.954936  862090 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
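The failed ls here is deliberate: minikube probes /var/lib/minikube/certs/etcd and treats exit status 2 (path missing) as evidence of a first start rather than as an error. The equivalent check in plain bash:

    # Exit status 0 => prior cluster state exists; non-zero => first start
    if ls /var/lib/minikube/certs/etcd >/dev/null 2>&1; then
        echo "existing etcd certs found; restart path"
    else
        echo "no etcd certs; treating this as a first start"
    fi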
	I0809 18:48:39.954985  862090 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-849795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-849795 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 18:48:39.955085  862090 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0809 18:48:39.955129  862090 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0809 18:48:39.988966  862090 cri.go:89] found id: ""
	I0809 18:48:39.989024  862090 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0809 18:48:39.997083  862090 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0809 18:48:40.005126  862090 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0809 18:48:40.005190  862090 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0809 18:48:40.012861  862090 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0809 18:48:40.012910  862090 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0809 18:48:40.054648  862090 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0809 18:48:40.054768  862090 kubeadm.go:322] [preflight] Running pre-flight checks
	I0809 18:48:40.092772  862090 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0809 18:48:40.092863  862090 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1038-gcp
	I0809 18:48:40.092918  862090 kubeadm.go:322] OS: Linux
	I0809 18:48:40.092982  862090 kubeadm.go:322] CGROUPS_CPU: enabled
	I0809 18:48:40.093053  862090 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0809 18:48:40.093120  862090 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0809 18:48:40.093196  862090 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0809 18:48:40.093264  862090 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0809 18:48:40.093337  862090 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0809 18:48:40.159853  862090 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0809 18:48:40.160047  862090 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0809 18:48:40.160191  862090 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
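The preflight hint refers to pre-pulling the control-plane images so that kubeadm init does not block on the network; a sketch, assuming a kubeadm binary matching the cluster version on PATH:

    # Pre-pull the images kubeadm init will need for this cluster version
    kubeadm config images pull --kubernetes-version v1.18.20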
	I0809 18:48:40.338279  862090 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0809 18:48:40.339142  862090 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0809 18:48:40.339219  862090 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0809 18:48:40.412278  862090 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0809 18:48:40.416073  862090 out.go:204]   - Generating certificates and keys ...
	I0809 18:48:40.416229  862090 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0809 18:48:40.416333  862090 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0809 18:48:40.587331  862090 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0809 18:48:40.691814  862090 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0809 18:48:40.805290  862090 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0809 18:48:41.037050  862090 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0809 18:48:41.100114  862090 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0809 18:48:41.100309  862090 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-849795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0809 18:48:41.239405  862090 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0809 18:48:41.239569  862090 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-849795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0809 18:48:41.353900  862090 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0809 18:48:41.547267  862090 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0809 18:48:41.686487  862090 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0809 18:48:41.686638  862090 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0809 18:48:41.991369  862090 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0809 18:48:42.060160  862090 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0809 18:48:42.270460  862090 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0809 18:48:42.621500  862090 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0809 18:48:42.622089  862090 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0809 18:48:42.624386  862090 out.go:204]   - Booting up control plane ...
	I0809 18:48:42.624519  862090 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0809 18:48:42.628763  862090 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0809 18:48:42.629788  862090 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0809 18:48:42.630499  862090 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0809 18:48:42.634774  862090 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0809 18:48:49.137343  862090 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.502544 seconds
	I0809 18:48:49.137524  862090 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0809 18:48:49.147934  862090 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0809 18:48:49.663333  862090 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0809 18:48:49.663585  862090 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-849795 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0809 18:48:50.171179  862090 kubeadm.go:322] [bootstrap-token] Using token: 1dxgzj.isdfpqm6ltv8k288
	I0809 18:48:50.172728  862090 out.go:204]   - Configuring RBAC rules ...
	I0809 18:48:50.172840  862090 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0809 18:48:50.176556  862090 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0809 18:48:50.182231  862090 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0809 18:48:50.184072  862090 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0809 18:48:50.185839  862090 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0809 18:48:50.187552  862090 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0809 18:48:50.199364  862090 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0809 18:48:50.405647  862090 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0809 18:48:50.585675  862090 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0809 18:48:50.587287  862090 kubeadm.go:322] 
	I0809 18:48:50.587377  862090 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0809 18:48:50.587395  862090 kubeadm.go:322] 
	I0809 18:48:50.587527  862090 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0809 18:48:50.587548  862090 kubeadm.go:322] 
	I0809 18:48:50.587580  862090 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0809 18:48:50.587673  862090 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0809 18:48:50.587788  862090 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0809 18:48:50.587806  862090 kubeadm.go:322] 
	I0809 18:48:50.587875  862090 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0809 18:48:50.588005  862090 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0809 18:48:50.588129  862090 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0809 18:48:50.588170  862090 kubeadm.go:322] 
	I0809 18:48:50.588301  862090 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0809 18:48:50.588440  862090 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0809 18:48:50.588456  862090 kubeadm.go:322] 
	I0809 18:48:50.588590  862090 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 1dxgzj.isdfpqm6ltv8k288 \
	I0809 18:48:50.588740  862090 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:85b611322a15d151ff0bcc3d793970c57c3eadea4a879931fb9494c00472255c \
	I0809 18:48:50.588782  862090 kubeadm.go:322]     --control-plane 
	I0809 18:48:50.588791  862090 kubeadm.go:322] 
	I0809 18:48:50.588925  862090 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0809 18:48:50.588935  862090 kubeadm.go:322] 
	I0809 18:48:50.589044  862090 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 1dxgzj.isdfpqm6ltv8k288 \
	I0809 18:48:50.589839  862090 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:85b611322a15d151ff0bcc3d793970c57c3eadea4a879931fb9494c00472255c 
	I0809 18:48:50.591084  862090 kubeadm.go:322] W0809 18:48:40.054181    1364 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0809 18:48:50.591303  862090 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1038-gcp\n", err: exit status 1
	I0809 18:48:50.591416  862090 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0809 18:48:50.591583  862090 kubeadm.go:322] W0809 18:48:42.628450    1364 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0809 18:48:50.591819  862090 kubeadm.go:322] W0809 18:48:42.629556    1364 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
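The join commands echoed by kubeadm above are directly replayable on additional machines; the bootstrap token and discovery hash are the ones minted for this run and expire with the token's TTL:

    # Add a worker node using the credentials printed in the init output
    kubeadm join control-plane.minikube.internal:8443 \
        --token 1dxgzj.isdfpqm6ltv8k288 \
        --discovery-token-ca-cert-hash sha256:85b611322a15d151ff0bcc3d793970c57c3eadea4a879931fb9494c00472255c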
	I0809 18:48:50.591840  862090 cni.go:84] Creating CNI manager for ""
	I0809 18:48:50.591851  862090 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0809 18:48:50.593995  862090 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0809 18:48:50.595616  862090 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0809 18:48:50.599850  862090 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0809 18:48:50.599864  862090 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0809 18:48:50.618186  862090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
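Once the kindnet manifest is applied through the bundled kubectl, the CNI DaemonSet should appear in kube-system; a quick check, assuming the manifest labels its pods app=kindnet:

    # Verify the CNI pods created from /var/tmp/minikube/cni.yaml
    sudo /var/lib/minikube/binaries/v1.18.20/kubectl get pods -n kube-system \
        -l app=kindnet --kubeconfig=/var/lib/minikube/kubeconfig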
	I0809 18:48:51.089172  862090 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0809 18:48:51.089257  862090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:48:51.089257  862090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.1 minikube.k8s.io/commit=e286a113bb5db20a65222adef757d15268cdbb1a minikube.k8s.io/name=ingress-addon-legacy-849795 minikube.k8s.io/updated_at=2023_08_09T18_48_51_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:48:51.184671  862090 ops.go:34] apiserver oom_adj: -16
	I0809 18:48:51.184730  862090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:48:51.270205  862090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:48:51.834841  862090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:48:52.334537  862090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:48:52.835274  862090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:48:53.334957  862090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:48:53.834455  862090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:48:54.334342  862090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:48:54.835251  862090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:48:55.334378  862090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:48:55.834563  862090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:48:56.334530  862090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:48:56.834313  862090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:48:57.334594  862090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:48:57.834384  862090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:48:58.334800  862090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:48:58.834193  862090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:48:59.334249  862090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:48:59.834265  862090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:49:00.334596  862090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:49:00.834878  862090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:49:01.334761  862090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:49:01.834823  862090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:49:02.335116  862090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:49:02.834947  862090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:49:03.334422  862090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:49:03.834638  862090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:49:04.334601  862090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:49:04.834308  862090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:49:05.334280  862090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:49:05.834270  862090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:49:06.334839  862090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
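The burst of identical "get sa default" invocations above is a fixed-interval poll: the start path retries roughly every 500ms until the default ServiceAccount exists, the signal that kube-system RBAC can be elevated. The same wait, sketched in bash:

    # Poll (like the ~500ms retries above) until the default SA is created
    until sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default \
            --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
        sleep 0.5
    done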
	I0809 18:49:06.402648  862090 kubeadm.go:1081] duration metric: took 15.31347021s to wait for elevateKubeSystemPrivileges.
	I0809 18:49:06.402709  862090 kubeadm.go:406] StartCluster complete in 26.44772843s
	I0809 18:49:06.402736  862090 settings.go:142] acquiring lock: {Name:mk873daac26ba3897eede1f5f8e0b40f2c63510f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 18:49:06.402828  862090 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17011-816603/kubeconfig
	I0809 18:49:06.403681  862090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17011-816603/kubeconfig: {Name:mk4f98edb5dc8df50bdb1180a23f12dadd75d59f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 18:49:06.403946  862090 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0809 18:49:06.404043  862090 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0809 18:49:06.404147  862090 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-849795"
	I0809 18:49:06.404172  862090 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-849795"
	I0809 18:49:06.404173  862090 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-849795"
	I0809 18:49:06.404184  862090 config.go:182] Loaded profile config "ingress-addon-legacy-849795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0809 18:49:06.404208  862090 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-849795"
	I0809 18:49:06.404235  862090 host.go:66] Checking if "ingress-addon-legacy-849795" exists ...
	I0809 18:49:06.404605  862090 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-849795 --format={{.State.Status}}
	I0809 18:49:06.404700  862090 kapi.go:59] client config for ingress-addon-legacy-849795: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/client.crt", KeyFile:"/home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/client.key", CAFile:"/home/jenkins/minikube-integration/17011-816603/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d27100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0809 18:49:06.404819  862090 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-849795 --format={{.State.Status}}
	I0809 18:49:06.405615  862090 cert_rotation.go:137] Starting client certificate rotation controller
	I0809 18:49:06.426896  862090 kapi.go:59] client config for ingress-addon-legacy-849795: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/client.crt", KeyFile:"/home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/client.key", CAFile:"/home/jenkins/minikube-integration/17011-816603/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d27100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0809 18:49:06.427285  862090 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-849795" context rescaled to 1 replicas
	I0809 18:49:06.427327  862090 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0809 18:49:06.429610  862090 out.go:177] * Verifying Kubernetes components...
	I0809 18:49:06.431455  862090 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0809 18:49:06.431486  862090 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-849795"
	I0809 18:49:06.433295  862090 host.go:66] Checking if "ingress-addon-legacy-849795" exists ...
	I0809 18:49:06.433860  862090 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-849795 --format={{.State.Status}}
	I0809 18:49:06.431547  862090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0809 18:49:06.434251  862090 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0809 18:49:06.434435  862090 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0809 18:49:06.434522  862090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-849795
	I0809 18:49:06.451324  862090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33422 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/ingress-addon-legacy-849795/id_rsa Username:docker}
	I0809 18:49:06.452601  862090 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0809 18:49:06.452625  862090 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0809 18:49:06.452681  862090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-849795
	I0809 18:49:06.486816  862090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33422 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/ingress-addon-legacy-849795/id_rsa Username:docker}
	I0809 18:49:06.587078  862090 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
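The sed pipeline above edits the CoreDNS Corefile in flight: it inserts a log directive before the errors line and, reconstructed from the two -e expressions, the following hosts block ahead of the forward plugin so host.minikube.internal resolves to the host gateway:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }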
	I0809 18:49:06.587729  862090 kapi.go:59] client config for ingress-addon-legacy-849795: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/client.crt", KeyFile:"/home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/client.key", CAFile:"/home/jenkins/minikube-integration/17011-816603/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d27100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0809 18:49:06.588106  862090 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-849795" to be "Ready" ...
	I0809 18:49:06.674783  862090 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0809 18:49:06.678883  862090 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0809 18:49:07.095611  862090 start.go:901] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0809 18:49:07.280560  862090 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0809 18:49:07.281866  862090 addons.go:502] enable addons completed in 877.829652ms: enabled=[storage-provisioner default-storageclass]
	I0809 18:49:08.597765  862090 node_ready.go:58] node "ingress-addon-legacy-849795" has status "Ready":"False"
	I0809 18:49:10.597987  862090 node_ready.go:58] node "ingress-addon-legacy-849795" has status "Ready":"False"
	I0809 18:49:12.598349  862090 node_ready.go:58] node "ingress-addon-legacy-849795" has status "Ready":"False"
	I0809 18:49:15.098330  862090 node_ready.go:58] node "ingress-addon-legacy-849795" has status "Ready":"False"
	I0809 18:49:17.598385  862090 node_ready.go:58] node "ingress-addon-legacy-849795" has status "Ready":"False"
	I0809 18:49:20.098312  862090 node_ready.go:58] node "ingress-addon-legacy-849795" has status "Ready":"False"
	I0809 18:49:21.098251  862090 node_ready.go:49] node "ingress-addon-legacy-849795" has status "Ready":"True"
	I0809 18:49:21.098280  862090 node_ready.go:38] duration metric: took 14.510148842s waiting for node "ingress-addon-legacy-849795" to be "Ready" ...
	I0809 18:49:21.098289  862090 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0809 18:49:21.104734  862090 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-nnj5q" in "kube-system" namespace to be "Ready" ...
	I0809 18:49:23.111782  862090 pod_ready.go:102] pod "coredns-66bff467f8-nnj5q" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-09 18:49:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0809 18:49:25.612416  862090 pod_ready.go:102] pod "coredns-66bff467f8-nnj5q" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-09 18:49:05 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0809 18:49:27.614092  862090 pod_ready.go:102] pod "coredns-66bff467f8-nnj5q" in "kube-system" namespace has status "Ready":"False"
	I0809 18:49:29.614657  862090 pod_ready.go:102] pod "coredns-66bff467f8-nnj5q" in "kube-system" namespace has status "Ready":"False"
	I0809 18:49:32.114190  862090 pod_ready.go:92] pod "coredns-66bff467f8-nnj5q" in "kube-system" namespace has status "Ready":"True"
	I0809 18:49:32.114218  862090 pod_ready.go:81] duration metric: took 11.009456939s waiting for pod "coredns-66bff467f8-nnj5q" in "kube-system" namespace to be "Ready" ...
	I0809 18:49:32.114230  862090 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-849795" in "kube-system" namespace to be "Ready" ...
	I0809 18:49:32.118309  862090 pod_ready.go:92] pod "etcd-ingress-addon-legacy-849795" in "kube-system" namespace has status "Ready":"True"
	I0809 18:49:32.118329  862090 pod_ready.go:81] duration metric: took 4.091339ms waiting for pod "etcd-ingress-addon-legacy-849795" in "kube-system" namespace to be "Ready" ...
	I0809 18:49:32.118344  862090 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-849795" in "kube-system" namespace to be "Ready" ...
	I0809 18:49:32.122128  862090 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-849795" in "kube-system" namespace has status "Ready":"True"
	I0809 18:49:32.122148  862090 pod_ready.go:81] duration metric: took 3.79741ms waiting for pod "kube-apiserver-ingress-addon-legacy-849795" in "kube-system" namespace to be "Ready" ...
	I0809 18:49:32.122169  862090 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-849795" in "kube-system" namespace to be "Ready" ...
	I0809 18:49:32.125974  862090 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-849795" in "kube-system" namespace has status "Ready":"True"
	I0809 18:49:32.125995  862090 pod_ready.go:81] duration metric: took 3.818734ms waiting for pod "kube-controller-manager-ingress-addon-legacy-849795" in "kube-system" namespace to be "Ready" ...
	I0809 18:49:32.126009  862090 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7hlsb" in "kube-system" namespace to be "Ready" ...
	I0809 18:49:32.129656  862090 pod_ready.go:92] pod "kube-proxy-7hlsb" in "kube-system" namespace has status "Ready":"True"
	I0809 18:49:32.129673  862090 pod_ready.go:81] duration metric: took 3.656105ms waiting for pod "kube-proxy-7hlsb" in "kube-system" namespace to be "Ready" ...
	I0809 18:49:32.129681  862090 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-849795" in "kube-system" namespace to be "Ready" ...
	I0809 18:49:32.310118  862090 request.go:628] Waited for 180.327605ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-849795
	I0809 18:49:32.510166  862090 request.go:628] Waited for 197.370559ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-849795
	I0809 18:49:32.513095  862090 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-849795" in "kube-system" namespace has status "Ready":"True"
	I0809 18:49:32.513117  862090 pod_ready.go:81] duration metric: took 383.429857ms waiting for pod "kube-scheduler-ingress-addon-legacy-849795" in "kube-system" namespace to be "Ready" ...
	I0809 18:49:32.513138  862090 pod_ready.go:38] duration metric: took 11.414837573s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0809 18:49:32.513156  862090 api_server.go:52] waiting for apiserver process to appear ...
	I0809 18:49:32.513216  862090 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0809 18:49:32.524092  862090 api_server.go:72] duration metric: took 26.09672419s to wait for apiserver process to appear ...
	I0809 18:49:32.524131  862090 api_server.go:88] waiting for apiserver healthz status ...
	I0809 18:49:32.524151  862090 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0809 18:49:32.529048  862090 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
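The healthz probe is a plain HTTPS GET that returns the literal body "ok"; it can be reproduced from the host, with -k skipping verification since the apiserver certificate chains to the cluster CA rather than a system one:

    # Same endpoint api_server.go polls above
    curl -k https://192.168.49.2:8443/healthz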
	I0809 18:49:32.529887  862090 api_server.go:141] control plane version: v1.18.20
	I0809 18:49:32.529909  862090 api_server.go:131] duration metric: took 5.771779ms to wait for apiserver health ...
	I0809 18:49:32.529917  862090 system_pods.go:43] waiting for kube-system pods to appear ...
	I0809 18:49:32.709304  862090 request.go:628] Waited for 179.292962ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0809 18:49:32.714683  862090 system_pods.go:59] 8 kube-system pods found
	I0809 18:49:32.714714  862090 system_pods.go:61] "coredns-66bff467f8-nnj5q" [152371e2-dba6-4df2-a9dc-fe5e074d4a89] Running
	I0809 18:49:32.714719  862090 system_pods.go:61] "etcd-ingress-addon-legacy-849795" [80fb6df1-d133-43c4-9377-f50c9a997c40] Running
	I0809 18:49:32.714723  862090 system_pods.go:61] "kindnet-ffm4k" [2d9a6617-e033-429e-86be-76a14e803ab0] Running
	I0809 18:49:32.714729  862090 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-849795" [76b6aacb-078d-4479-8cba-2d5656079c61] Running
	I0809 18:49:32.714736  862090 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-849795" [cd88d7b6-8d8e-48a9-ad55-bb54f78b6601] Running
	I0809 18:49:32.714742  862090 system_pods.go:61] "kube-proxy-7hlsb" [8e8d6ad8-5544-443d-9443-909533f76b33] Running
	I0809 18:49:32.714755  862090 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-849795" [af6f5478-a9f7-4b4a-8164-e3e49b6c3e92] Running
	I0809 18:49:32.714766  862090 system_pods.go:61] "storage-provisioner" [fd89e86d-38c2-4542-b56c-6fba363ee633] Running
	I0809 18:49:32.714773  862090 system_pods.go:74] duration metric: took 184.849884ms to wait for pod list to return data ...
	I0809 18:49:32.714784  862090 default_sa.go:34] waiting for default service account to be created ...
	I0809 18:49:32.910206  862090 request.go:628] Waited for 195.343212ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0809 18:49:32.912529  862090 default_sa.go:45] found service account: "default"
	I0809 18:49:32.912554  862090 default_sa.go:55] duration metric: took 197.762395ms for default service account to be created ...
	I0809 18:49:32.912562  862090 system_pods.go:116] waiting for k8s-apps to be running ...
	I0809 18:49:33.109999  862090 request.go:628] Waited for 197.355592ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0809 18:49:33.115245  862090 system_pods.go:86] 8 kube-system pods found
	I0809 18:49:33.115273  862090 system_pods.go:89] "coredns-66bff467f8-nnj5q" [152371e2-dba6-4df2-a9dc-fe5e074d4a89] Running
	I0809 18:49:33.115278  862090 system_pods.go:89] "etcd-ingress-addon-legacy-849795" [80fb6df1-d133-43c4-9377-f50c9a997c40] Running
	I0809 18:49:33.115282  862090 system_pods.go:89] "kindnet-ffm4k" [2d9a6617-e033-429e-86be-76a14e803ab0] Running
	I0809 18:49:33.115289  862090 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-849795" [76b6aacb-078d-4479-8cba-2d5656079c61] Running
	I0809 18:49:33.115293  862090 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-849795" [cd88d7b6-8d8e-48a9-ad55-bb54f78b6601] Running
	I0809 18:49:33.115300  862090 system_pods.go:89] "kube-proxy-7hlsb" [8e8d6ad8-5544-443d-9443-909533f76b33] Running
	I0809 18:49:33.115304  862090 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-849795" [af6f5478-a9f7-4b4a-8164-e3e49b6c3e92] Running
	I0809 18:49:33.115308  862090 system_pods.go:89] "storage-provisioner" [fd89e86d-38c2-4542-b56c-6fba363ee633] Running
	I0809 18:49:33.115317  862090 system_pods.go:126] duration metric: took 202.747338ms to wait for k8s-apps to be running ...
	I0809 18:49:33.115324  862090 system_svc.go:44] waiting for kubelet service to be running ....
	I0809 18:49:33.115367  862090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0809 18:49:33.127704  862090 system_svc.go:56] duration metric: took 12.367292ms WaitForService to wait for kubelet.
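WaitForService reduces to a systemd liveness check whose --quiet flag suppresses output so only the exit status matters; the standard single-unit form by hand:

    # Exit 0 iff kubelet is active; prints nothing because of --quiet
    sudo systemctl is-active --quiet kubelet && echo "kubelet running"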
	I0809 18:49:33.127734  862090 kubeadm.go:581] duration metric: took 26.700371737s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0809 18:49:33.127763  862090 node_conditions.go:102] verifying NodePressure condition ...
	I0809 18:49:33.309346  862090 request.go:628] Waited for 181.486383ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0809 18:49:33.312240  862090 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0809 18:49:33.312263  862090 node_conditions.go:123] node cpu capacity is 8
	I0809 18:49:33.312275  862090 node_conditions.go:105] duration metric: took 184.50713ms to run NodePressure ...
	I0809 18:49:33.312286  862090 start.go:228] waiting for startup goroutines ...
	I0809 18:49:33.312292  862090 start.go:233] waiting for cluster config update ...
	I0809 18:49:33.312302  862090 start.go:242] writing updated cluster config ...
	I0809 18:49:33.312592  862090 ssh_runner.go:195] Run: rm -f paused
	I0809 18:49:33.359474  862090 start.go:599] kubectl: 1.27.4, cluster: 1.18.20 (minor skew: 9)
	I0809 18:49:33.361509  862090 out.go:177] 
	W0809 18:49:33.362979  862090 out.go:239] ! /usr/local/bin/kubectl is version 1.27.4, which may have incompatibilities with Kubernetes 1.18.20.
	I0809 18:49:33.364388  862090 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0809 18:49:33.365680  862090 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-849795" cluster and "default" namespace by default
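Given the minor-version skew warned about above (kubectl 1.27.4 against a 1.18.20 cluster), the suggested workaround is minikube's bundled, version-matched kubectl:

    # Runs the kubectl that minikube downloaded for v1.18.20
    minikube kubectl -- get pods -A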
	
	* 
	* ==> CRI-O <==
	* Aug 09 18:52:24 ingress-addon-legacy-849795 crio[956]: time="2023-08-09 18:52:24.330606417Z" level=info msg="Started container" PID=4829 containerID=847c60bac8119d65571b27dec8ce1be4bcbee09671a870a94d4c1a33fa596e62 description=default/hello-world-app-5f5d8b66bb-g7569/hello-world-app id=25e95b91-c4fc-4334-a415-12f8b79b544f name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=bfa2354f5fbfb7cfe38b0d46c9982d046bc349a5e6e75180c523bb73d9715ea6
	Aug 09 18:52:34 ingress-addon-legacy-849795 crio[956]: time="2023-08-09 18:52:34.863316759Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=f926d0ac-c0e7-46c8-a74c-cbfffe32811f name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 09 18:52:38 ingress-addon-legacy-849795 crio[956]: time="2023-08-09 18:52:38.864164113Z" level=info msg="Stopping pod sandbox: 68e0a8d2ae5c524cb680a09fcf00777cfc8fccd88bc6138141321bff6f9bd664" id=422ae0f0-b693-483f-98d2-c62916c9a551 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 09 18:52:38 ingress-addon-legacy-849795 crio[956]: time="2023-08-09 18:52:38.865118252Z" level=info msg="Stopped pod sandbox: 68e0a8d2ae5c524cb680a09fcf00777cfc8fccd88bc6138141321bff6f9bd664" id=422ae0f0-b693-483f-98d2-c62916c9a551 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 09 18:52:39 ingress-addon-legacy-849795 crio[956]: time="2023-08-09 18:52:39.344941538Z" level=info msg="Stopping pod sandbox: 68e0a8d2ae5c524cb680a09fcf00777cfc8fccd88bc6138141321bff6f9bd664" id=3e82e6fe-814a-4e1f-b28a-bc132180ed4c name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 09 18:52:39 ingress-addon-legacy-849795 crio[956]: time="2023-08-09 18:52:39.345001834Z" level=info msg="Stopped pod sandbox (already stopped): 68e0a8d2ae5c524cb680a09fcf00777cfc8fccd88bc6138141321bff6f9bd664" id=3e82e6fe-814a-4e1f-b28a-bc132180ed4c name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 09 18:52:40 ingress-addon-legacy-849795 crio[956]: time="2023-08-09 18:52:40.128053732Z" level=info msg="Stopping container: 4f3bdf5fe28fb2953b7c6dcdc00042d2ba26a000dbc69e4eae4407ba36060518 (timeout: 2s)" id=cb39f923-897f-46c0-9ad1-a000cc0b9dac name=/runtime.v1alpha2.RuntimeService/StopContainer
	Aug 09 18:52:40 ingress-addon-legacy-849795 crio[956]: time="2023-08-09 18:52:40.130005225Z" level=info msg="Stopping container: 4f3bdf5fe28fb2953b7c6dcdc00042d2ba26a000dbc69e4eae4407ba36060518 (timeout: 2s)" id=ca179bf3-055d-4bc8-8a05-3068f2ccda52 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Aug 09 18:52:40 ingress-addon-legacy-849795 crio[956]: time="2023-08-09 18:52:40.862935235Z" level=info msg="Stopping pod sandbox: 68e0a8d2ae5c524cb680a09fcf00777cfc8fccd88bc6138141321bff6f9bd664" id=2f7cddf0-96da-40b8-bea6-720eb17b828c name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 09 18:52:40 ingress-addon-legacy-849795 crio[956]: time="2023-08-09 18:52:40.862998279Z" level=info msg="Stopped pod sandbox (already stopped): 68e0a8d2ae5c524cb680a09fcf00777cfc8fccd88bc6138141321bff6f9bd664" id=2f7cddf0-96da-40b8-bea6-720eb17b828c name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 09 18:52:42 ingress-addon-legacy-849795 crio[956]: time="2023-08-09 18:52:42.137662370Z" level=warning msg="Stopping container 4f3bdf5fe28fb2953b7c6dcdc00042d2ba26a000dbc69e4eae4407ba36060518 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=cb39f923-897f-46c0-9ad1-a000cc0b9dac name=/runtime.v1alpha2.RuntimeService/StopContainer
	Aug 09 18:52:42 ingress-addon-legacy-849795 conmon[3471]: conmon 4f3bdf5fe28fb2953b7c <ninfo>: container 3483 exited with status 137
	Aug 09 18:52:42 ingress-addon-legacy-849795 crio[956]: time="2023-08-09 18:52:42.303687813Z" level=info msg="Stopped container 4f3bdf5fe28fb2953b7c6dcdc00042d2ba26a000dbc69e4eae4407ba36060518: ingress-nginx/ingress-nginx-controller-7fcf777cb7-wzclq/controller" id=cb39f923-897f-46c0-9ad1-a000cc0b9dac name=/runtime.v1alpha2.RuntimeService/StopContainer
	Aug 09 18:52:42 ingress-addon-legacy-849795 crio[956]: time="2023-08-09 18:52:42.303762515Z" level=info msg="Stopped container 4f3bdf5fe28fb2953b7c6dcdc00042d2ba26a000dbc69e4eae4407ba36060518: ingress-nginx/ingress-nginx-controller-7fcf777cb7-wzclq/controller" id=ca179bf3-055d-4bc8-8a05-3068f2ccda52 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Aug 09 18:52:42 ingress-addon-legacy-849795 crio[956]: time="2023-08-09 18:52:42.304361965Z" level=info msg="Stopping pod sandbox: 2049ef92d8b89d8ee31d4f2c7d7137747dab14f2079c4f700105040eb1cdf298" id=b396474d-30a0-4e81-ae07-e4270928813c name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 09 18:52:42 ingress-addon-legacy-849795 crio[956]: time="2023-08-09 18:52:42.304374854Z" level=info msg="Stopping pod sandbox: 2049ef92d8b89d8ee31d4f2c7d7137747dab14f2079c4f700105040eb1cdf298" id=bd020036-939f-41e9-90f9-c943262f7b24 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 09 18:52:42 ingress-addon-legacy-849795 crio[956]: time="2023-08-09 18:52:42.307426376Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-AXISHM5YPOBELZ7K - [0:0]\n:KUBE-HP-KYC2PBGLO4MGEGBU - [0:0]\n-X KUBE-HP-KYC2PBGLO4MGEGBU\n-X KUBE-HP-AXISHM5YPOBELZ7K\nCOMMIT\n"
	Aug 09 18:52:42 ingress-addon-legacy-849795 crio[956]: time="2023-08-09 18:52:42.308896235Z" level=info msg="Closing host port tcp:80"
	Aug 09 18:52:42 ingress-addon-legacy-849795 crio[956]: time="2023-08-09 18:52:42.308943819Z" level=info msg="Closing host port tcp:443"
	Aug 09 18:52:42 ingress-addon-legacy-849795 crio[956]: time="2023-08-09 18:52:42.310036375Z" level=info msg="Host port tcp:80 does not have an open socket"
	Aug 09 18:52:42 ingress-addon-legacy-849795 crio[956]: time="2023-08-09 18:52:42.310054564Z" level=info msg="Host port tcp:443 does not have an open socket"
	Aug 09 18:52:42 ingress-addon-legacy-849795 crio[956]: time="2023-08-09 18:52:42.310219785Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-wzclq Namespace:ingress-nginx ID:2049ef92d8b89d8ee31d4f2c7d7137747dab14f2079c4f700105040eb1cdf298 UID:688f33cf-9652-488b-a1bb-f6a8e6a451aa NetNS:/var/run/netns/1c8505fc-2c5a-49a1-bb6f-2e50d7df1410 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 09 18:52:42 ingress-addon-legacy-849795 crio[956]: time="2023-08-09 18:52:42.310382215Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-wzclq from CNI network \"kindnet\" (type=ptp)"
	Aug 09 18:52:42 ingress-addon-legacy-849795 crio[956]: time="2023-08-09 18:52:42.349211299Z" level=info msg="Stopped pod sandbox: 2049ef92d8b89d8ee31d4f2c7d7137747dab14f2079c4f700105040eb1cdf298" id=b396474d-30a0-4e81-ae07-e4270928813c name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 09 18:52:42 ingress-addon-legacy-849795 crio[956]: time="2023-08-09 18:52:42.349334064Z" level=info msg="Stopped pod sandbox (already stopped): 2049ef92d8b89d8ee31d4f2c7d7137747dab14f2079c4f700105040eb1cdf298" id=bd020036-939f-41e9-90f9-c943262f7b24 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	847c60bac8119       gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea            23 seconds ago      Running             hello-world-app           0                   bfa2354f5fbfb       hello-world-app-5f5d8b66bb-g7569
	c27d203338b39       docker.io/library/nginx@sha256:647c5c83418c19eef0cddc647b9899326e3081576390c4c7baa4fce545123b6c                    2 minutes ago       Running             nginx                     0                   a744c704f5b50       nginx
	4f3bdf5fe28fb       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   3 minutes ago       Exited              controller                0                   2049ef92d8b89       ingress-nginx-controller-7fcf777cb7-wzclq
	f981eed5ab0f3       a013daf8730dbb3908d66f67c57053f09055fddb28fde0b5808cb24c27900dc8                                                   3 minutes ago       Exited              patch                     1                   0ad17bccc7a9c       ingress-nginx-admission-patch-6zxwh
	3c5faf8758ebc       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   e49bae3dbf741       ingress-nginx-admission-create-rc87g
	d8694469c779b       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   f7121967ca542       coredns-66bff467f8-nnj5q
	2a3f923d4a8bf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   2849a9cb5168b       storage-provisioner
	5cc4078bf767f       docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974                 3 minutes ago       Running             kindnet-cni               0                   a40bf54de3559       kindnet-ffm4k
	677371f79822e       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   f7211fee71821       kube-proxy-7hlsb
	bea9d419ac174       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   4 minutes ago       Running             kube-controller-manager   0                   a84fa82f3301a       kube-controller-manager-ingress-addon-legacy-849795
	f21133aebcf85       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   4 minutes ago       Running             kube-apiserver            0                   5211b18eab3a3       kube-apiserver-ingress-addon-legacy-849795
	59be901be31ad       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   4 minutes ago       Running             kube-scheduler            0                   0e0709ceb0334       kube-scheduler-ingress-addon-legacy-849795
	9f9468576d9f7       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   4 minutes ago       Running             etcd                      0                   3a8bcedf30ee3       etcd-ingress-addon-legacy-849795
	
	* 
	* ==> coredns [d8694469c779be8456e6244cc0be271f7af3baea5708ef96a43bd68dbd906c6f] <==
	* [INFO] 10.244.0.5:32915 - 24997 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.006057277s
	[INFO] 10.244.0.5:51088 - 15602 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005776242s
	[INFO] 10.244.0.5:48822 - 7892 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006434355s
	[INFO] 10.244.0.5:58236 - 16739 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006529736s
	[INFO] 10.244.0.5:32882 - 3092 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006706656s
	[INFO] 10.244.0.5:59831 - 53738 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006574127s
	[INFO] 10.244.0.5:35337 - 51902 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006683362s
	[INFO] 10.244.0.5:48608 - 28327 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006689239s
	[INFO] 10.244.0.5:32915 - 5898 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006666958s
	[INFO] 10.244.0.5:59831 - 13524 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004833918s
	[INFO] 10.244.0.5:48822 - 64287 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005978536s
	[INFO] 10.244.0.5:58236 - 1601 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00587374s
	[INFO] 10.244.0.5:32882 - 41666 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005883435s
	[INFO] 10.244.0.5:32915 - 14149 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00567965s
	[INFO] 10.244.0.5:48608 - 56795 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005715743s
	[INFO] 10.244.0.5:51088 - 12744 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006197869s
	[INFO] 10.244.0.5:35337 - 14804 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005758234s
	[INFO] 10.244.0.5:59831 - 56790 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00005237s
	[INFO] 10.244.0.5:48822 - 23099 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000052287s
	[INFO] 10.244.0.5:51088 - 57761 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000078094s
	[INFO] 10.244.0.5:48608 - 33207 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000058066s
	[INFO] 10.244.0.5:58236 - 37235 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000295045s
	[INFO] 10.244.0.5:35337 - 28024 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000151071s
	[INFO] 10.244.0.5:32915 - 25507 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000217263s
	[INFO] 10.244.0.5:32882 - 19208 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000057645s
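
Note: the NXDOMAIN run above is ordinary resolver search-path expansion, not a CoreDNS fault. The queried name hello-world-app.default.svc.cluster.local has four dots, below the usual ndots:5 threshold, so the pod's resolver first appends each configured search suffix (the c.k8s-minikube.internal and google.internal suffixes visible above come from the GCE host) before trying the name as-is, which then answers NOERROR from the cluster zone. A hedged way to confirm from inside a pod; the pod name and the exact search list shown are illustrative assumptions based on the suffixes in the log, not values captured by this test:

	kubectl --context ingress-addon-legacy-849795 exec <some-pod> -- cat /etc/resolv.conf
	# expected shape (assumed):
	#   nameserver 10.96.0.10
	#   search default.svc.cluster.local svc.cluster.local cluster.local c.k8s-minikube.internal google.internal
	#   options ndots:5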
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-849795
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-849795
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e286a113bb5db20a65222adef757d15268cdbb1a
	                    minikube.k8s.io/name=ingress-addon-legacy-849795
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_09T18_48_51_0700
	                    minikube.k8s.io/version=v1.31.1
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Aug 2023 18:48:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-849795
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Aug 2023 18:52:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Aug 2023 18:50:20 +0000   Wed, 09 Aug 2023 18:48:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Aug 2023 18:50:20 +0000   Wed, 09 Aug 2023 18:48:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Aug 2023 18:50:20 +0000   Wed, 09 Aug 2023 18:48:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Aug 2023 18:50:20 +0000   Wed, 09 Aug 2023 18:49:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-849795
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	System Info:
	  Machine ID:                 fe69081b7a5a4b9498b9067ed5303283
	  System UUID:                d90c8531-948a-4aa8-ad8b-2c4775b14712
	  Boot ID:                    ea1f61fe-b434-46c1-afe7-153d4b2d65ef
	  Kernel Version:             5.15.0-1038-gcp
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-g7569                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  kube-system                 coredns-66bff467f8-nnj5q                               100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m42s
	  kube-system                 etcd-ingress-addon-legacy-849795                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kindnet-ffm4k                                          100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m42s
	  kube-system                 kube-apiserver-ingress-addon-legacy-849795             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-849795    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 kube-proxy-7hlsb                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 kube-scheduler-ingress-addon-legacy-849795             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             120Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  Starting                 4m5s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m5s (x4 over 4m5s)  kubelet     Node ingress-addon-legacy-849795 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m5s (x5 over 4m5s)  kubelet     Node ingress-addon-legacy-849795 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m5s (x4 over 4m5s)  kubelet     Node ingress-addon-legacy-849795 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m57s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m57s                kubelet     Node ingress-addon-legacy-849795 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m57s                kubelet     Node ingress-addon-legacy-849795 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m57s                kubelet     Node ingress-addon-legacy-849795 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m41s                kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m27s                kubelet     Node ingress-addon-legacy-849795 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.007359] FS-Cache: O-key=[8] 'bea40f0200000000'
	[  +0.004926] FS-Cache: N-cookie c=00000034 [p=00000027 fl=2 nc=0 na=1]
	[  +0.006608] FS-Cache: N-cookie d=000000004c2712ea{9p.inode} n=0000000080630bc4
	[  +0.008736] FS-Cache: N-key=[8] 'bea40f0200000000'
	[  +2.843207] FS-Cache: Duplicate cookie detected
	[  +0.004724] FS-Cache: O-cookie c=00000037 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006740] FS-Cache: O-cookie d=00000000bb1401be{9P.session} n=00000000868856b7
	[  +0.007516] FS-Cache: O-key=[10] '34323937313531323330'
	[  +0.005345] FS-Cache: N-cookie c=00000038 [p=00000002 fl=2 nc=0 na=1]
	[  +0.006654] FS-Cache: N-cookie d=00000000bb1401be{9P.session} n=0000000032e17c6a
	[  +0.008918] FS-Cache: N-key=[10] '34323937313531323330'
	[Aug 9 18:50] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 93 4f 7d 89 d1 46 70 9f b9 c1 47 08 00
	[  +1.019509] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 4e 93 4f 7d 89 d1 46 70 9f b9 c1 47 08 00
	[  +2.019754] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 93 4f 7d 89 d1 46 70 9f b9 c1 47 08 00
	[  +4.187595] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 4e 93 4f 7d 89 d1 46 70 9f b9 c1 47 08 00
	[  +8.191166] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 93 4f 7d 89 d1 46 70 9f b9 c1 47 08 00
	[ +16.130450] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 93 4f 7d 89 d1 46 70 9f b9 c1 47 08 00
	[Aug 9 18:51] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 93 4f 7d 89 d1 46 70 9f b9 c1 47 08 00
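
Note: in these kernel entries the destination is printed first and the offending source second, so traffic sourced from 127.0.0.1 is being forwarded towards pod 10.244.0.5 and dropped as a martian. That is consistent with the ingress checks in this report, which hit 127.0.0.1 on the node and get DNATed towards the controller pod: by default the kernel refuses to route loopback-sourced packets onto other interfaces. A hedged way to inspect (and, on a disposable test node only, relax) that behaviour, assuming eth0 is the relevant interface as the log suggests:

	sysctl net.ipv4.conf.eth0.route_localnet   # 0 = loopback-sourced forwards are dropped as martians
	sudo sysctl -w net.ipv4.conf.eth0.route_localnet=1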
	
	* 
	* ==> etcd [9f9468576d9f7216bf44dcaa6cd80d17af72d4626e2b130c8c7f7c5d36cd4756] <==
	* raft2023/08/09 18:48:43 INFO: aec36adc501070cc became follower at term 0
	raft2023/08/09 18:48:43 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/08/09 18:48:43 INFO: aec36adc501070cc became follower at term 1
	raft2023/08/09 18:48:43 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-08-09 18:48:43.785930 W | auth: simple token is not cryptographically signed
	2023-08-09 18:48:43.789572 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-08-09 18:48:43.790908 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/08/09 18:48:43 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-08-09 18:48:43.791241 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-08-09 18:48:43.792926 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-08-09 18:48:43.792989 I | embed: listening for peers on 192.168.49.2:2380
	2023-08-09 18:48:43.793214 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/08/09 18:48:43 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/08/09 18:48:43 INFO: aec36adc501070cc became candidate at term 2
	raft2023/08/09 18:48:43 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/08/09 18:48:43 INFO: aec36adc501070cc became leader at term 2
	raft2023/08/09 18:48:43 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-08-09 18:48:43.882656 I | etcdserver: published {Name:ingress-addon-legacy-849795 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-08-09 18:48:43.882740 I | etcdserver: setting up the initial cluster version to 3.4
	2023-08-09 18:48:43.882836 I | embed: ready to serve client requests
	2023-08-09 18:48:43.882956 I | embed: ready to serve client requests
	2023-08-09 18:48:43.883132 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-08-09 18:48:43.883237 I | etcdserver/api: enabled capabilities for version 3.4
	2023-08-09 18:48:43.885345 I | embed: serving client requests on 127.0.0.1:2379
	2023-08-09 18:48:43.885442 I | embed: serving client requests on 192.168.49.2:2379
	
	* 
	* ==> kernel <==
	*  18:52:47 up  2:35,  0 users,  load average: 0.33, 0.87, 2.10
	Linux ingress-addon-legacy-849795 5.15.0-1038-gcp #46~20.04.1-Ubuntu SMP Fri Jul 14 09:48:19 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [5cc4078bf767ff782521211b9c3c91a9875123d43b39e1eb8949a1d9c49a60f5] <==
	* I0809 18:50:39.317874       1 main.go:227] handling current node
	I0809 18:50:49.326635       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0809 18:50:49.326661       1 main.go:227] handling current node
	I0809 18:50:59.330325       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0809 18:50:59.330355       1 main.go:227] handling current node
	I0809 18:51:09.334076       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0809 18:51:09.334103       1 main.go:227] handling current node
	I0809 18:51:19.337428       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0809 18:51:19.337453       1 main.go:227] handling current node
	I0809 18:51:29.346636       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0809 18:51:29.346664       1 main.go:227] handling current node
	I0809 18:51:39.350694       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0809 18:51:39.350717       1 main.go:227] handling current node
	I0809 18:51:49.362592       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0809 18:51:49.362618       1 main.go:227] handling current node
	I0809 18:51:59.365882       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0809 18:51:59.365906       1 main.go:227] handling current node
	I0809 18:52:09.369347       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0809 18:52:09.369374       1 main.go:227] handling current node
	I0809 18:52:19.374834       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0809 18:52:19.374865       1 main.go:227] handling current node
	I0809 18:52:29.379162       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0809 18:52:29.379185       1 main.go:227] handling current node
	I0809 18:52:39.382820       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0809 18:52:39.382845       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [f21133aebcf8578765536bb15b1dafa009c29fd542a4e47170311fd1766bee5f] <==
	* E0809 18:48:47.617307       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0809 18:48:47.708634       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0809 18:48:47.709180       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0809 18:48:47.709557       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0809 18:48:47.710076       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0809 18:48:47.710101       1 cache.go:39] Caches are synced for autoregister controller
	I0809 18:48:48.606930       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0809 18:48:48.606965       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0809 18:48:48.612690       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0809 18:48:48.615568       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0809 18:48:48.615589       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0809 18:48:48.896314       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0809 18:48:48.923188       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0809 18:48:48.989711       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0809 18:48:48.990567       1 controller.go:609] quota admission added evaluator for: endpoints
	I0809 18:48:48.993793       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0809 18:48:49.919167       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0809 18:48:50.397239       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0809 18:48:50.577460       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0809 18:48:50.802549       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0809 18:49:05.548722       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0809 18:49:05.789603       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0809 18:49:34.003326       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0809 18:50:02.194187       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E0809 18:52:40.138953       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	* 
	* ==> kube-controller-manager [bea9d419ac17405ac9fc0d207bc2072bc71b0509722e6017e39530dc99d857a8] <==
	* I0809 18:49:05.993049       1 shared_informer.go:230] Caches are synced for resource quota 
	I0809 18:49:06.040115       1 shared_informer.go:230] Caches are synced for disruption 
	I0809 18:49:06.040143       1 disruption.go:339] Sending events to api server.
	I0809 18:49:06.057676       1 shared_informer.go:230] Caches are synced for stateful set 
	I0809 18:49:06.095441       1 shared_informer.go:230] Caches are synced for attach detach 
	I0809 18:49:06.108220       1 shared_informer.go:230] Caches are synced for service account 
	I0809 18:49:06.132772       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0809 18:49:06.132807       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0809 18:49:06.139007       1 shared_informer.go:230] Caches are synced for namespace 
	I0809 18:49:06.191471       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0809 18:49:06.240490       1 request.go:621] Throttling request took 1.049036058s, request: GET:https://control-plane.minikube.internal:8443/apis/authentication.k8s.io/v1beta1?timeout=32s
	I0809 18:49:06.428790       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"641856c4-768d-4235-9de3-1d088462582b", APIVersion:"apps/v1", ResourceVersion:"373", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0809 18:49:06.464595       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"0406f1f2-510e-4954-a4f5-ee26dd528e8e", APIVersion:"apps/v1", ResourceVersion:"374", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-z4gx2
	I0809 18:49:06.856467       1 shared_informer.go:223] Waiting for caches to sync for resource quota
	I0809 18:49:06.856520       1 shared_informer.go:230] Caches are synced for resource quota 
	I0809 18:49:25.746941       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0809 18:49:33.995007       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"da6cf71d-e39c-42a7-9e3f-9c973c042e22", APIVersion:"apps/v1", ResourceVersion:"480", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0809 18:49:34.000869       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"acb39838-c7ec-41a4-b3a9-03ad78b69a38", APIVersion:"apps/v1", ResourceVersion:"481", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-wzclq
	I0809 18:49:34.063459       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"7d2165b5-c9ab-437d-8e17-13ff818bdba6", APIVersion:"batch/v1", ResourceVersion:"489", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-rc87g
	I0809 18:49:34.075214       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"cd736868-a32f-494f-853e-a377dc96b0b7", APIVersion:"batch/v1", ResourceVersion:"498", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-6zxwh
	I0809 18:49:37.071914       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"7d2165b5-c9ab-437d-8e17-13ff818bdba6", APIVersion:"batch/v1", ResourceVersion:"499", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0809 18:49:37.080693       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"cd736868-a32f-494f-853e-a377dc96b0b7", APIVersion:"batch/v1", ResourceVersion:"504", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0809 18:52:22.706978       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"64d5df65-7597-4ebd-9be7-1981721972c3", APIVersion:"apps/v1", ResourceVersion:"729", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0809 18:52:22.712127       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"df454e13-7c31-447e-ba9d-caf31b7f4b9b", APIVersion:"apps/v1", ResourceVersion:"730", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-g7569
	E0809 18:52:44.820345       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-xqspp" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [677371f79822e7d7434b187e5a8881e5a080b4a85982b8579e60f82e458a03b9] <==
	* W0809 18:49:06.977238       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0809 18:49:06.986020       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0809 18:49:06.986056       1 server_others.go:186] Using iptables Proxier.
	I0809 18:49:06.986374       1 server.go:583] Version: v1.18.20
	I0809 18:49:07.055155       1 config.go:315] Starting service config controller
	I0809 18:49:07.055197       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0809 18:49:07.055628       1 config.go:133] Starting endpoints config controller
	I0809 18:49:07.055676       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0809 18:49:07.155456       1 shared_informer.go:230] Caches are synced for service config 
	I0809 18:49:07.155840       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [59be901be31ad7598c6a4e3373a4d9a44e322ac337d1414c72b7e0d217178d76] <==
	* I0809 18:48:47.674218       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0809 18:48:47.676048       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0809 18:48:47.676073       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0809 18:48:47.676401       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0809 18:48:47.676467       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0809 18:48:47.677275       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0809 18:48:47.677811       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0809 18:48:47.678334       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0809 18:48:47.678526       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0809 18:48:47.678576       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0809 18:48:47.678665       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0809 18:48:47.678690       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0809 18:48:47.678726       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0809 18:48:47.678753       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0809 18:48:47.678785       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0809 18:48:47.678861       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0809 18:48:47.679056       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0809 18:48:48.596310       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0809 18:48:48.611512       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0809 18:48:48.658450       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0809 18:48:48.672046       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0809 18:48:48.757757       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0809 18:48:48.758799       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0809 18:48:50.776270       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0809 18:49:07.360864       1 factory.go:503] pod: kube-system/storage-provisioner is already present in unschedulable queue
	
	* 
	* ==> kubelet <==
	* Aug 09 18:52:09 ingress-addon-legacy-849795 kubelet[1847]: E0809 18:52:09.863871    1847 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Aug 09 18:52:09 ingress-addon-legacy-849795 kubelet[1847]: E0809 18:52:09.863901    1847 pod_workers.go:191] Error syncing pod a96927a8-3ea7-4c3f-b42c-09401c4bfd14 ("kube-ingress-dns-minikube_kube-system(a96927a8-3ea7-4c3f-b42c-09401c4bfd14)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Aug 09 18:52:22 ingress-addon-legacy-849795 kubelet[1847]: I0809 18:52:22.716442    1847 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Aug 09 18:52:22 ingress-addon-legacy-849795 kubelet[1847]: E0809 18:52:22.863684    1847 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Aug 09 18:52:22 ingress-addon-legacy-849795 kubelet[1847]: E0809 18:52:22.863731    1847 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Aug 09 18:52:22 ingress-addon-legacy-849795 kubelet[1847]: E0809 18:52:22.863792    1847 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Aug 09 18:52:22 ingress-addon-legacy-849795 kubelet[1847]: E0809 18:52:22.863834    1847 pod_workers.go:191] Error syncing pod a96927a8-3ea7-4c3f-b42c-09401c4bfd14 ("kube-ingress-dns-minikube_kube-system(a96927a8-3ea7-4c3f-b42c-09401c4bfd14)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Aug 09 18:52:22 ingress-addon-legacy-849795 kubelet[1847]: I0809 18:52:22.880273    1847 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-mcfbn" (UniqueName: "kubernetes.io/secret/1bac8c63-a6ad-424b-a005-e5e1bf9ff1cc-default-token-mcfbn") pod "hello-world-app-5f5d8b66bb-g7569" (UID: "1bac8c63-a6ad-424b-a005-e5e1bf9ff1cc")
	Aug 09 18:52:23 ingress-addon-legacy-849795 kubelet[1847]: W0809 18:52:23.100572    1847 manager.go:1131] Failed to process watch event {EventType:0 Name:/docker/734105a6199ecf57c10c92b965fc3d5070955eba183d0bfc47998fa9a52f481f/crio-bfa2354f5fbfb7cfe38b0d46c9982d046bc349a5e6e75180c523bb73d9715ea6 WatchSource:0}: Error finding container bfa2354f5fbfb7cfe38b0d46c9982d046bc349a5e6e75180c523bb73d9715ea6: Status 404 returned error
	Aug 09 18:52:34 ingress-addon-legacy-849795 kubelet[1847]: E0809 18:52:34.863696    1847 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Aug 09 18:52:34 ingress-addon-legacy-849795 kubelet[1847]: E0809 18:52:34.863746    1847 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Aug 09 18:52:34 ingress-addon-legacy-849795 kubelet[1847]: E0809 18:52:34.863802    1847 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Aug 09 18:52:34 ingress-addon-legacy-849795 kubelet[1847]: E0809 18:52:34.863848    1847 pod_workers.go:191] Error syncing pod a96927a8-3ea7-4c3f-b42c-09401c4bfd14 ("kube-ingress-dns-minikube_kube-system(a96927a8-3ea7-4c3f-b42c-09401c4bfd14)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Aug 09 18:52:38 ingress-addon-legacy-849795 kubelet[1847]: I0809 18:52:38.517664    1847 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-wph86" (UniqueName: "kubernetes.io/secret/a96927a8-3ea7-4c3f-b42c-09401c4bfd14-minikube-ingress-dns-token-wph86") pod "a96927a8-3ea7-4c3f-b42c-09401c4bfd14" (UID: "a96927a8-3ea7-4c3f-b42c-09401c4bfd14")
	Aug 09 18:52:38 ingress-addon-legacy-849795 kubelet[1847]: I0809 18:52:38.519631    1847 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a96927a8-3ea7-4c3f-b42c-09401c4bfd14-minikube-ingress-dns-token-wph86" (OuterVolumeSpecName: "minikube-ingress-dns-token-wph86") pod "a96927a8-3ea7-4c3f-b42c-09401c4bfd14" (UID: "a96927a8-3ea7-4c3f-b42c-09401c4bfd14"). InnerVolumeSpecName "minikube-ingress-dns-token-wph86". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 09 18:52:38 ingress-addon-legacy-849795 kubelet[1847]: I0809 18:52:38.617984    1847 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-wph86" (UniqueName: "kubernetes.io/secret/a96927a8-3ea7-4c3f-b42c-09401c4bfd14-minikube-ingress-dns-token-wph86") on node "ingress-addon-legacy-849795" DevicePath ""
	Aug 09 18:52:40 ingress-addon-legacy-849795 kubelet[1847]: E0809 18:52:40.129391    1847 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-wzclq.1779cbbfe6e9cc83", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-wzclq", UID:"688f33cf-9652-488b-a1bb-f6a8e6a451aa", APIVersion:"v1", ResourceVersion:"485", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-849795"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc12d15fe079b1c83, ext:229765638695, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc12d15fe079b1c83, ext:229765638695, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-wzclq.1779cbbfe6e9cc83" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 09 18:52:40 ingress-addon-legacy-849795 kubelet[1847]: E0809 18:52:40.132777    1847 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-wzclq.1779cbbfe6e9cc83", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-wzclq", UID:"688f33cf-9652-488b-a1bb-f6a8e6a451aa", APIVersion:"v1", ResourceVersion:"485", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-849795"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc12d15fe079b1c83, ext:229765638695, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc12d15fe07bb5865, ext:229767751192, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-wzclq.1779cbbfe6e9cc83" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 09 18:52:42 ingress-addon-legacy-849795 kubelet[1847]: I0809 18:52:42.527466    1847 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-8t7cb" (UniqueName: "kubernetes.io/secret/688f33cf-9652-488b-a1bb-f6a8e6a451aa-ingress-nginx-token-8t7cb") pod "688f33cf-9652-488b-a1bb-f6a8e6a451aa" (UID: "688f33cf-9652-488b-a1bb-f6a8e6a451aa")
	Aug 09 18:52:42 ingress-addon-legacy-849795 kubelet[1847]: I0809 18:52:42.527523    1847 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/688f33cf-9652-488b-a1bb-f6a8e6a451aa-webhook-cert") pod "688f33cf-9652-488b-a1bb-f6a8e6a451aa" (UID: "688f33cf-9652-488b-a1bb-f6a8e6a451aa")
	Aug 09 18:52:42 ingress-addon-legacy-849795 kubelet[1847]: I0809 18:52:42.529532    1847 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/688f33cf-9652-488b-a1bb-f6a8e6a451aa-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "688f33cf-9652-488b-a1bb-f6a8e6a451aa" (UID: "688f33cf-9652-488b-a1bb-f6a8e6a451aa"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 09 18:52:42 ingress-addon-legacy-849795 kubelet[1847]: I0809 18:52:42.529732    1847 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/688f33cf-9652-488b-a1bb-f6a8e6a451aa-ingress-nginx-token-8t7cb" (OuterVolumeSpecName: "ingress-nginx-token-8t7cb") pod "688f33cf-9652-488b-a1bb-f6a8e6a451aa" (UID: "688f33cf-9652-488b-a1bb-f6a8e6a451aa"). InnerVolumeSpecName "ingress-nginx-token-8t7cb". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 09 18:52:42 ingress-addon-legacy-849795 kubelet[1847]: I0809 18:52:42.627848    1847 reconciler.go:319] Volume detached for volume "ingress-nginx-token-8t7cb" (UniqueName: "kubernetes.io/secret/688f33cf-9652-488b-a1bb-f6a8e6a451aa-ingress-nginx-token-8t7cb") on node "ingress-addon-legacy-849795" DevicePath ""
	Aug 09 18:52:42 ingress-addon-legacy-849795 kubelet[1847]: I0809 18:52:42.627890    1847 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/688f33cf-9652-488b-a1bb-f6a8e6a451aa-webhook-cert") on node "ingress-addon-legacy-849795" DevicePath ""
	Aug 09 18:52:43 ingress-addon-legacy-849795 kubelet[1847]: W0809 18:52:43.343737    1847 pod_container_deletor.go:77] Container "2049ef92d8b89d8ee31d4f2c7d7137747dab14f2079c4f700105040eb1cdf298" not found in pod's containers
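
Note: the recurring ImageInspectError above is a short-name resolution failure in CRI-O rather than a pull error: cryptexlabs/minikube-ingress-dns carries no registry host, and the node's /etc/containers/registries.conf declares no unqualified-search registries, so the runtime cannot decide which registry to consult at all. Two hedged remedies, sketched under the assumption that the image lives on Docker Hub; neither was applied by this test:

	# 1) Reference the image by a fully qualified name:
	#      docker.io/cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab
	# 2) Or allow short-name lookups node-wide in /etc/containers/registries.conf:
	#      unqualified-search-registries = ["docker.io"]
	sudo systemctl restart crio   # pick up the registries.conf change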
	
	* 
	* ==> storage-provisioner [2a3f923d4a8bf6f2850ac1db6e627391ae61aa22f2e9340398a8f42e568cefea] <==
	* I0809 18:49:24.220203       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0809 18:49:24.228236       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0809 18:49:24.228310       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0809 18:49:24.233625       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0809 18:49:24.233692       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"66d90a06-deb3-4ec1-b724-83bbc393608c", APIVersion:"v1", ResourceVersion:"425", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-849795_8ebc28dd-5b12-4cfc-97d5-8a2bd6905dc1 became leader
	I0809 18:49:24.233792       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-849795_8ebc28dd-5b12-4cfc-97d5-8a2bd6905dc1!
	I0809 18:49:24.334632       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-849795_8ebc28dd-5b12-4cfc-97d5-8a2bd6905dc1!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-849795 -n ingress-addon-legacy-849795
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-849795 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (183.82s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (3.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-814696 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-814696 -- exec busybox-67b7f59bb-jxlzc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-814696 -- exec busybox-67b7f59bb-jxlzc -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-814696 -- exec busybox-67b7f59bb-jxlzc -- sh -c "ping -c 1 192.168.58.1": exit status 1 (164.511953ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-67b7f59bb-jxlzc): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-814696 -- exec busybox-67b7f59bb-wvdrx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-814696 -- exec busybox-67b7f59bb-wvdrx -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-814696 -- exec busybox-67b7f59bb-wvdrx -- sh -c "ping -c 1 192.168.58.1": exit status 1 (170.658737ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-67b7f59bb-wvdrx): exit status 1
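
Note: both pods resolve and begin sending (the PING header prints), so this is not a routing failure; busybox's ping simply cannot open an ICMP socket as the pod's non-root user. Hedged remedies, with values that are illustrative rather than taken from this run:

	# Node-wide: permit unprivileged ICMP-echo datagram sockets for every group
	# (helps only if the image's ping can fall back to SOCK_DGRAM)
	sudo sysctl -w net.ipv4.ping_group_range="0 2147483647"
	# Pod-level alternative: grant the raw-socket capability in the pod spec:
	#   securityContext:
	#     capabilities:
	#       add: ["NET_RAW"]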
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-814696
helpers_test.go:235: (dbg) docker inspect multinode-814696:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8ea453b976d14165f5dce4299673a2a7dce0fde1a3e93bb5272c76245cac0de7",
	        "Created": "2023-08-09T18:57:43.610125212Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 908515,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-08-09T18:57:43.881640454Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:51eee4927f7e218e70017d38db072c77f0b6036bbfe389eac8043694e7529d58",
	        "ResolvConfPath": "/var/lib/docker/containers/8ea453b976d14165f5dce4299673a2a7dce0fde1a3e93bb5272c76245cac0de7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8ea453b976d14165f5dce4299673a2a7dce0fde1a3e93bb5272c76245cac0de7/hostname",
	        "HostsPath": "/var/lib/docker/containers/8ea453b976d14165f5dce4299673a2a7dce0fde1a3e93bb5272c76245cac0de7/hosts",
	        "LogPath": "/var/lib/docker/containers/8ea453b976d14165f5dce4299673a2a7dce0fde1a3e93bb5272c76245cac0de7/8ea453b976d14165f5dce4299673a2a7dce0fde1a3e93bb5272c76245cac0de7-json.log",
	        "Name": "/multinode-814696",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-814696:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-814696",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f7bee930b416b8241b93fa3222b5d394ae991c6e19ecd305a83da0f25347d5f6-init/diff:/var/lib/docker/overlay2/dffcbda35d4e6780372e77e03c9f976a612c164e3ac348da817dd7b6996e96fb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f7bee930b416b8241b93fa3222b5d394ae991c6e19ecd305a83da0f25347d5f6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f7bee930b416b8241b93fa3222b5d394ae991c6e19ecd305a83da0f25347d5f6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f7bee930b416b8241b93fa3222b5d394ae991c6e19ecd305a83da0f25347d5f6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-814696",
	                "Source": "/var/lib/docker/volumes/multinode-814696/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-814696",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-814696",
	                "name.minikube.sigs.k8s.io": "multinode-814696",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "02f5c1b60654b6cb3596945565d4792b2d391ce235c0132e2482a74f71665255",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33482"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33481"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33478"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33480"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33479"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/02f5c1b60654",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-814696": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "8ea453b976d1",
	                        "multinode-814696"
	                    ],
	                    "NetworkID": "f5f975ef181d39f80d876826fcad78848add9d611bb3e4915a047f7de531818f",
	                    "EndpointID": "a00c982c3fbb78c5fe79eb32d982eeeac2823847344b5c18fe3a5b7459119e7f",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
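The PortBindings above request ephemeral host ports (an empty HostPort bound to 127.0.0.1), and the ports Docker actually assigned appear under NetworkSettings.Ports. They can be read back with a Go template, the same technique the minikube logs below use to locate the SSH port; a minimal sketch for by-hand debugging, not part of the test run:

	# Print the host port mapped to the container's SSH port (22/tcp)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' multinode-814696
	# -> 33482 for this run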
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-814696 -n multinode-814696
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814696 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-814696 logs -n 25: (1.271142677s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-838549                           | mount-start-2-838549 | jenkins | v1.31.1 | 09 Aug 23 18:57 UTC | 09 Aug 23 18:57 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-838549 ssh -- ls                    | mount-start-2-838549 | jenkins | v1.31.1 | 09 Aug 23 18:57 UTC | 09 Aug 23 18:57 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-819631                           | mount-start-1-819631 | jenkins | v1.31.1 | 09 Aug 23 18:57 UTC | 09 Aug 23 18:57 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-838549 ssh -- ls                    | mount-start-2-838549 | jenkins | v1.31.1 | 09 Aug 23 18:57 UTC | 09 Aug 23 18:57 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-838549                           | mount-start-2-838549 | jenkins | v1.31.1 | 09 Aug 23 18:57 UTC | 09 Aug 23 18:57 UTC |
	| start   | -p mount-start-2-838549                           | mount-start-2-838549 | jenkins | v1.31.1 | 09 Aug 23 18:57 UTC | 09 Aug 23 18:57 UTC |
	| ssh     | mount-start-2-838549 ssh -- ls                    | mount-start-2-838549 | jenkins | v1.31.1 | 09 Aug 23 18:57 UTC | 09 Aug 23 18:57 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-838549                           | mount-start-2-838549 | jenkins | v1.31.1 | 09 Aug 23 18:57 UTC | 09 Aug 23 18:57 UTC |
	| delete  | -p mount-start-1-819631                           | mount-start-1-819631 | jenkins | v1.31.1 | 09 Aug 23 18:57 UTC | 09 Aug 23 18:57 UTC |
	| start   | -p multinode-814696                               | multinode-814696     | jenkins | v1.31.1 | 09 Aug 23 18:57 UTC | 09 Aug 23 18:59 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-814696 -- apply -f                   | multinode-814696     | jenkins | v1.31.1 | 09 Aug 23 18:59 UTC | 09 Aug 23 18:59 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-814696 -- rollout                    | multinode-814696     | jenkins | v1.31.1 | 09 Aug 23 18:59 UTC | 09 Aug 23 18:59 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-814696 -- get pods -o                | multinode-814696     | jenkins | v1.31.1 | 09 Aug 23 18:59 UTC | 09 Aug 23 18:59 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-814696 -- get pods -o                | multinode-814696     | jenkins | v1.31.1 | 09 Aug 23 18:59 UTC | 09 Aug 23 18:59 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-814696 -- exec                       | multinode-814696     | jenkins | v1.31.1 | 09 Aug 23 18:59 UTC | 09 Aug 23 18:59 UTC |
	|         | busybox-67b7f59bb-jxlzc --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-814696 -- exec                       | multinode-814696     | jenkins | v1.31.1 | 09 Aug 23 18:59 UTC | 09 Aug 23 18:59 UTC |
	|         | busybox-67b7f59bb-wvdrx --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-814696 -- exec                       | multinode-814696     | jenkins | v1.31.1 | 09 Aug 23 18:59 UTC | 09 Aug 23 18:59 UTC |
	|         | busybox-67b7f59bb-jxlzc --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-814696 -- exec                       | multinode-814696     | jenkins | v1.31.1 | 09 Aug 23 18:59 UTC | 09 Aug 23 18:59 UTC |
	|         | busybox-67b7f59bb-wvdrx --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-814696 -- exec                       | multinode-814696     | jenkins | v1.31.1 | 09 Aug 23 18:59 UTC | 09 Aug 23 18:59 UTC |
	|         | busybox-67b7f59bb-jxlzc -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-814696 -- exec                       | multinode-814696     | jenkins | v1.31.1 | 09 Aug 23 18:59 UTC | 09 Aug 23 18:59 UTC |
	|         | busybox-67b7f59bb-wvdrx -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-814696 -- get pods -o                | multinode-814696     | jenkins | v1.31.1 | 09 Aug 23 18:59 UTC | 09 Aug 23 18:59 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-814696 -- exec                       | multinode-814696     | jenkins | v1.31.1 | 09 Aug 23 18:59 UTC | 09 Aug 23 18:59 UTC |
	|         | busybox-67b7f59bb-jxlzc                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-814696 -- exec                       | multinode-814696     | jenkins | v1.31.1 | 09 Aug 23 18:59 UTC |                     |
	|         | busybox-67b7f59bb-jxlzc -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-814696 -- exec                       | multinode-814696     | jenkins | v1.31.1 | 09 Aug 23 18:59 UTC | 09 Aug 23 18:59 UTC |
	|         | busybox-67b7f59bb-wvdrx                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-814696 -- exec                       | multinode-814696     | jenkins | v1.31.1 | 09 Aug 23 18:59 UTC |                     |
	|         | busybox-67b7f59bb-wvdrx -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
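	The two rows above with an empty End Time are the checks this test failed on: each busybox pod pinging the host-side gateway 192.168.58.1. Assuming the kubeconfig context matches the profile name and using the pod names from the table, the same check can be reproduced by hand with plain kubectl (a sketch):
	
	# Re-run the failed ping from each busybox pod
	kubectl --context multinode-814696 exec busybox-67b7f59bb-jxlzc -- sh -c "ping -c 1 192.168.58.1"
	kubectl --context multinode-814696 exec busybox-67b7f59bb-wvdrx -- sh -c "ping -c 1 192.168.58.1"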
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/09 18:57:37
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.20.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0809 18:57:37.739290  907909 out.go:296] Setting OutFile to fd 1 ...
	I0809 18:57:37.739430  907909 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 18:57:37.739439  907909 out.go:309] Setting ErrFile to fd 2...
	I0809 18:57:37.739443  907909 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 18:57:37.739667  907909 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17011-816603/.minikube/bin
	I0809 18:57:37.740293  907909 out.go:303] Setting JSON to false
	I0809 18:57:37.741611  907909 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":9613,"bootTime":1691597845,"procs":690,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1038-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0809 18:57:37.741675  907909 start.go:138] virtualization: kvm guest
	I0809 18:57:37.744040  907909 out.go:177] * [multinode-814696] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)
	I0809 18:57:37.746108  907909 out.go:177]   - MINIKUBE_LOCATION=17011
	I0809 18:57:37.746125  907909 notify.go:220] Checking for updates...
	I0809 18:57:37.747785  907909 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 18:57:37.749664  907909 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17011-816603/kubeconfig
	I0809 18:57:37.751052  907909 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17011-816603/.minikube
	I0809 18:57:37.752458  907909 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0809 18:57:37.753875  907909 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0809 18:57:37.755365  907909 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 18:57:37.778440  907909 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0809 18:57:37.778567  907909 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0809 18:57:37.829617  907909 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:36 SystemTime:2023-08-09 18:57:37.821118968 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1038-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0809 18:57:37.829724  907909 docker.go:294] overlay module found
	I0809 18:57:37.831513  907909 out.go:177] * Using the docker driver based on user configuration
	I0809 18:57:37.832907  907909 start.go:298] selected driver: docker
	I0809 18:57:37.832922  907909 start.go:901] validating driver "docker" against <nil>
	I0809 18:57:37.832934  907909 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0809 18:57:37.833653  907909 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0809 18:57:37.885189  907909 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:36 SystemTime:2023-08-09 18:57:37.877106985 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1038-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0809 18:57:37.885422  907909 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0809 18:57:37.885632  907909 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0809 18:57:37.887133  907909 out.go:177] * Using Docker driver with root privileges
	I0809 18:57:37.888308  907909 cni.go:84] Creating CNI manager for ""
	I0809 18:57:37.888320  907909 cni.go:136] 0 nodes found, recommending kindnet
	I0809 18:57:37.888331  907909 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0809 18:57:37.888348  907909 start_flags.go:319] config:
	{Name:multinode-814696 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-814696 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 18:57:37.889810  907909 out.go:177] * Starting control plane node multinode-814696 in cluster multinode-814696
	I0809 18:57:37.891056  907909 cache.go:122] Beginning downloading kic base image for docker with crio
	I0809 18:57:37.892336  907909 out.go:177] * Pulling base image ...
	I0809 18:57:37.893524  907909 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0809 18:57:37.893554  907909 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 in local docker daemon
	I0809 18:57:37.893565  907909 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17011-816603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4
	I0809 18:57:37.893582  907909 cache.go:57] Caching tarball of preloaded images
	I0809 18:57:37.893673  907909 preload.go:174] Found /home/jenkins/minikube-integration/17011-816603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0809 18:57:37.893686  907909 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on crio
	I0809 18:57:37.894005  907909 profile.go:148] Saving config to /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/multinode-814696/config.json ...
	I0809 18:57:37.894029  907909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/multinode-814696/config.json: {Name:mk50c80e3e7d4bcaafa3a3dc1e2742ef8a7f8524 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 18:57:37.908847  907909 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 in local docker daemon, skipping pull
	I0809 18:57:37.908874  907909 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 exists in daemon, skipping load
	I0809 18:57:37.908894  907909 cache.go:195] Successfully downloaded all kic artifacts
	I0809 18:57:37.908922  907909 start.go:365] acquiring machines lock for multinode-814696: {Name:mk8821ca51834a5a8af689498b47e0dc5afb5bb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 18:57:37.909020  907909 start.go:369] acquired machines lock for "multinode-814696" in 77.895µs
	I0809 18:57:37.909050  907909 start.go:93] Provisioning new machine with config: &{Name:multinode-814696 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-814696 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0809 18:57:37.909125  907909 start.go:125] createHost starting for "" (driver="docker")
	I0809 18:57:37.911740  907909 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0809 18:57:37.912034  907909 start.go:159] libmachine.API.Create for "multinode-814696" (driver="docker")
	I0809 18:57:37.912078  907909 client.go:168] LocalClient.Create starting
	I0809 18:57:37.912161  907909 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca.pem
	I0809 18:57:37.912206  907909 main.go:141] libmachine: Decoding PEM data...
	I0809 18:57:37.912234  907909 main.go:141] libmachine: Parsing certificate...
	I0809 18:57:37.912322  907909 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17011-816603/.minikube/certs/cert.pem
	I0809 18:57:37.912350  907909 main.go:141] libmachine: Decoding PEM data...
	I0809 18:57:37.912364  907909 main.go:141] libmachine: Parsing certificate...
	I0809 18:57:37.912810  907909 cli_runner.go:164] Run: docker network inspect multinode-814696 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0809 18:57:37.928016  907909 cli_runner.go:211] docker network inspect multinode-814696 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0809 18:57:37.928094  907909 network_create.go:281] running [docker network inspect multinode-814696] to gather additional debugging logs...
	I0809 18:57:37.928115  907909 cli_runner.go:164] Run: docker network inspect multinode-814696
	W0809 18:57:37.943474  907909 cli_runner.go:211] docker network inspect multinode-814696 returned with exit code 1
	I0809 18:57:37.943510  907909 network_create.go:284] error running [docker network inspect multinode-814696]: docker network inspect multinode-814696: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-814696 not found
	I0809 18:57:37.943531  907909 network_create.go:286] output of [docker network inspect multinode-814696]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-814696 not found
	
	** /stderr **
	I0809 18:57:37.943597  907909 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0809 18:57:37.959204  907909 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-29989c4702eb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:ad:8a:31:88} reservation:<nil>}
	I0809 18:57:37.959857  907909 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001458d00}
	I0809 18:57:37.959888  907909 network_create.go:123] attempt to create docker network multinode-814696 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0809 18:57:37.959931  907909 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-814696 multinode-814696
	I0809 18:57:38.011581  907909 network_create.go:107] docker network multinode-814696 192.168.58.0/24 created
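	minikube settled on 192.168.58.0/24 only after finding 192.168.49.0/24 already taken; the probe amounts to reading each network's IPAM config. The same survey can be run by hand with the inspect template from the log above (a sketch):
	
	# List every docker network with its subnet to see which private /24s are in use
	docker network inspect $(docker network ls -q) --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'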
	I0809 18:57:38.011618  907909 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-814696" container
	I0809 18:57:38.011711  907909 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0809 18:57:38.027423  907909 cli_runner.go:164] Run: docker volume create multinode-814696 --label name.minikube.sigs.k8s.io=multinode-814696 --label created_by.minikube.sigs.k8s.io=true
	I0809 18:57:38.044180  907909 oci.go:103] Successfully created a docker volume multinode-814696
	I0809 18:57:38.044284  907909 cli_runner.go:164] Run: docker run --rm --name multinode-814696-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-814696 --entrypoint /usr/bin/test -v multinode-814696:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 -d /var/lib
	I0809 18:57:38.583412  907909 oci.go:107] Successfully prepared a docker volume multinode-814696
	I0809 18:57:38.583495  907909 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0809 18:57:38.583521  907909 kic.go:190] Starting extracting preloaded images to volume ...
	I0809 18:57:38.583602  907909 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17011-816603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-814696:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 -I lz4 -xf /preloaded.tar -C /extractDir
	I0809 18:57:43.545401  907909 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17011-816603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-814696:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 -I lz4 -xf /preloaded.tar -C /extractDir: (4.9617529s)
	I0809 18:57:43.545465  907909 kic.go:199] duration metric: took 4.961938 seconds to extract preloaded images to volume
	W0809 18:57:43.545622  907909 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0809 18:57:43.545726  907909 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0809 18:57:43.595669  907909 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-814696 --name multinode-814696 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-814696 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-814696 --network multinode-814696 --ip 192.168.58.2 --volume multinode-814696:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37
	I0809 18:57:43.889109  907909 cli_runner.go:164] Run: docker container inspect multinode-814696 --format={{.State.Running}}
	I0809 18:57:43.907226  907909 cli_runner.go:164] Run: docker container inspect multinode-814696 --format={{.State.Status}}
	I0809 18:57:43.923889  907909 cli_runner.go:164] Run: docker exec multinode-814696 stat /var/lib/dpkg/alternatives/iptables
	I0809 18:57:43.989010  907909 oci.go:144] the created container "multinode-814696" has a running status.
	I0809 18:57:43.989054  907909 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17011-816603/.minikube/machines/multinode-814696/id_rsa...
	I0809 18:57:44.103141  907909 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/machines/multinode-814696/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0809 18:57:44.103187  907909 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17011-816603/.minikube/machines/multinode-814696/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0809 18:57:44.123225  907909 cli_runner.go:164] Run: docker container inspect multinode-814696 --format={{.State.Status}}
	I0809 18:57:44.139787  907909 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0809 18:57:44.139824  907909 kic_runner.go:114] Args: [docker exec --privileged multinode-814696 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0809 18:57:44.215335  907909 cli_runner.go:164] Run: docker container inspect multinode-814696 --format={{.State.Status}}
	I0809 18:57:44.232068  907909 machine.go:88] provisioning docker machine ...
	I0809 18:57:44.232114  907909 ubuntu.go:169] provisioning hostname "multinode-814696"
	I0809 18:57:44.232183  907909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-814696
	I0809 18:57:44.251172  907909 main.go:141] libmachine: Using SSH client type: native
	I0809 18:57:44.251924  907909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 127.0.0.1 33482 <nil> <nil>}
	I0809 18:57:44.251955  907909 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-814696 && echo "multinode-814696" | sudo tee /etc/hostname
	I0809 18:57:44.252668  907909 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33690->127.0.0.1:33482: read: connection reset by peer
	I0809 18:57:47.402637  907909 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-814696
	
	I0809 18:57:47.402723  907909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-814696
	I0809 18:57:47.420239  907909 main.go:141] libmachine: Using SSH client type: native
	I0809 18:57:47.420814  907909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 127.0.0.1 33482 <nil> <nil>}
	I0809 18:57:47.420843  907909 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-814696' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-814696/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-814696' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0809 18:57:47.555944  907909 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0809 18:57:47.555972  907909 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17011-816603/.minikube CaCertPath:/home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17011-816603/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17011-816603/.minikube}
	I0809 18:57:47.556027  907909 ubuntu.go:177] setting up certificates
	I0809 18:57:47.556043  907909 provision.go:83] configureAuth start
	I0809 18:57:47.556109  907909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-814696
	I0809 18:57:47.572630  907909 provision.go:138] copyHostCerts
	I0809 18:57:47.572672  907909 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17011-816603/.minikube/cert.pem
	I0809 18:57:47.572705  907909 exec_runner.go:144] found /home/jenkins/minikube-integration/17011-816603/.minikube/cert.pem, removing ...
	I0809 18:57:47.572715  907909 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17011-816603/.minikube/cert.pem
	I0809 18:57:47.572789  907909 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17011-816603/.minikube/cert.pem (1123 bytes)
	I0809 18:57:47.572901  907909 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17011-816603/.minikube/key.pem
	I0809 18:57:47.572925  907909 exec_runner.go:144] found /home/jenkins/minikube-integration/17011-816603/.minikube/key.pem, removing ...
	I0809 18:57:47.572931  907909 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17011-816603/.minikube/key.pem
	I0809 18:57:47.572961  907909 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17011-816603/.minikube/key.pem (1679 bytes)
	I0809 18:57:47.573017  907909 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17011-816603/.minikube/ca.pem
	I0809 18:57:47.573035  907909 exec_runner.go:144] found /home/jenkins/minikube-integration/17011-816603/.minikube/ca.pem, removing ...
	I0809 18:57:47.573038  907909 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17011-816603/.minikube/ca.pem
	I0809 18:57:47.573058  907909 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17011-816603/.minikube/ca.pem (1082 bytes)
	I0809 18:57:47.573118  907909 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17011-816603/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca-key.pem org=jenkins.multinode-814696 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-814696]
	I0809 18:57:47.860663  907909 provision.go:172] copyRemoteCerts
	I0809 18:57:47.860733  907909 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0809 18:57:47.860768  907909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-814696
	I0809 18:57:47.877230  907909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33482 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/multinode-814696/id_rsa Username:docker}
	I0809 18:57:47.980292  907909 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0809 18:57:47.980363  907909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0809 18:57:48.002312  907909 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0809 18:57:48.002375  907909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0809 18:57:48.024871  907909 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0809 18:57:48.024953  907909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0809 18:57:48.047194  907909 provision.go:86] duration metric: configureAuth took 491.127319ms
	I0809 18:57:48.047224  907909 ubuntu.go:193] setting minikube options for container-runtime
	I0809 18:57:48.047440  907909 config.go:182] Loaded profile config "multinode-814696": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0809 18:57:48.047582  907909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-814696
	I0809 18:57:48.064029  907909 main.go:141] libmachine: Using SSH client type: native
	I0809 18:57:48.064642  907909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 127.0.0.1 33482 <nil> <nil>}
	I0809 18:57:48.064667  907909 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0809 18:57:48.286005  907909 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0809 18:57:48.286043  907909 machine.go:91] provisioned docker machine in 4.053941129s
	I0809 18:57:48.286053  907909 client.go:171] LocalClient.Create took 10.373962271s
	I0809 18:57:48.286073  907909 start.go:167] duration metric: libmachine.API.Create for "multinode-814696" took 10.374042791s
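	The %!s(MISSING) in the provisioning command above is a Go printf artifact in the log, not part of what ran: the logger treated the command line as a format string and the %s verb had lost its argument (the same artifact shows up again later in the 22%! / (MISSING) split of the df output). Reconstructed under that assumption, with the file body taken from the echoed output, an equivalent by-hand version using a heredoc would be:
	
	# Write crio's minikube options and restart the runtime (a reconstruction)
	sudo mkdir -p /etc/sysconfig
	sudo tee /etc/sysconfig/crio.minikube <<-'EOF'
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	EOF
	sudo systemctl restart crio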
	I0809 18:57:48.286083  907909 start.go:300] post-start starting for "multinode-814696" (driver="docker")
	I0809 18:57:48.286093  907909 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0809 18:57:48.286166  907909 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0809 18:57:48.286221  907909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-814696
	I0809 18:57:48.302447  907909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33482 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/multinode-814696/id_rsa Username:docker}
	I0809 18:57:48.400709  907909 ssh_runner.go:195] Run: cat /etc/os-release
	I0809 18:57:48.403778  907909 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.2 LTS"
	I0809 18:57:48.403802  907909 command_runner.go:130] > NAME="Ubuntu"
	I0809 18:57:48.403810  907909 command_runner.go:130] > VERSION_ID="22.04"
	I0809 18:57:48.403817  907909 command_runner.go:130] > VERSION="22.04.2 LTS (Jammy Jellyfish)"
	I0809 18:57:48.403822  907909 command_runner.go:130] > VERSION_CODENAME=jammy
	I0809 18:57:48.403830  907909 command_runner.go:130] > ID=ubuntu
	I0809 18:57:48.403837  907909 command_runner.go:130] > ID_LIKE=debian
	I0809 18:57:48.403841  907909 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0809 18:57:48.403846  907909 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0809 18:57:48.403854  907909 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0809 18:57:48.403863  907909 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0809 18:57:48.403869  907909 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0809 18:57:48.403946  907909 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0809 18:57:48.403968  907909 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0809 18:57:48.403979  907909 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0809 18:57:48.403987  907909 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0809 18:57:48.403996  907909 filesync.go:126] Scanning /home/jenkins/minikube-integration/17011-816603/.minikube/addons for local assets ...
	I0809 18:57:48.404052  907909 filesync.go:126] Scanning /home/jenkins/minikube-integration/17011-816603/.minikube/files for local assets ...
	I0809 18:57:48.404119  907909 filesync.go:149] local asset: /home/jenkins/minikube-integration/17011-816603/.minikube/files/etc/ssl/certs/8234342.pem -> 8234342.pem in /etc/ssl/certs
	I0809 18:57:48.404127  907909 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/files/etc/ssl/certs/8234342.pem -> /etc/ssl/certs/8234342.pem
	I0809 18:57:48.404213  907909 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0809 18:57:48.412270  907909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/files/etc/ssl/certs/8234342.pem --> /etc/ssl/certs/8234342.pem (1708 bytes)
	I0809 18:57:48.433960  907909 start.go:303] post-start completed in 147.859014ms
	I0809 18:57:48.434315  907909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-814696
	I0809 18:57:48.450542  907909 profile.go:148] Saving config to /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/multinode-814696/config.json ...
	I0809 18:57:48.450818  907909 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0809 18:57:48.450876  907909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-814696
	I0809 18:57:48.467933  907909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33482 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/multinode-814696/id_rsa Username:docker}
	I0809 18:57:48.560156  907909 command_runner.go:130] > 22%!
	(MISSING)I0809 18:57:48.560330  907909 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0809 18:57:48.564488  907909 command_runner.go:130] > 229G
	I0809 18:57:48.564521  907909 start.go:128] duration metric: createHost completed in 10.655387824s
	I0809 18:57:48.564532  907909 start.go:83] releasing machines lock for "multinode-814696", held for 10.65550188s
	I0809 18:57:48.564598  907909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-814696
	I0809 18:57:48.580555  907909 ssh_runner.go:195] Run: cat /version.json
	I0809 18:57:48.580601  907909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-814696
	I0809 18:57:48.580632  907909 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0809 18:57:48.580698  907909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-814696
	I0809 18:57:48.597188  907909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33482 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/multinode-814696/id_rsa Username:docker}
	I0809 18:57:48.598425  907909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33482 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/multinode-814696/id_rsa Username:docker}
	I0809 18:57:48.687058  907909 command_runner.go:130] > {"iso_version": "v1.31.0", "kicbase_version": "v0.0.40-1690799191-16971", "minikube_version": "v1.31.1", "commit": "c9a9d1e164f9532f3819e585f7a0abf3ece27773"}
	I0809 18:57:48.687193  907909 ssh_runner.go:195] Run: systemctl --version
	I0809 18:57:48.777504  907909 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0809 18:57:48.777557  907909 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.9)
	I0809 18:57:48.777587  907909 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0809 18:57:48.777673  907909 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0809 18:57:48.914457  907909 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0809 18:57:48.918599  907909 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0809 18:57:48.918637  907909 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0809 18:57:48.918648  907909 command_runner.go:130] > Device: 37h/55d	Inode: 797218      Links: 1
	I0809 18:57:48.918659  907909 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0809 18:57:48.918668  907909 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0809 18:57:48.918673  907909 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0809 18:57:48.918679  907909 command_runner.go:130] > Change: 2023-08-09 18:39:26.869078805 +0000
	I0809 18:57:48.918688  907909 command_runner.go:130] >  Birth: 2023-08-09 18:39:26.869078805 +0000
	I0809 18:57:48.918753  907909 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0809 18:57:48.936309  907909 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0809 18:57:48.936407  907909 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0809 18:57:48.963768  907909 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0809 18:57:48.963795  907909 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
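The two find/mv runs above neutralize the preinstalled loopback and bridge/podman CNI definitions by renaming them with a .mk_disabled suffix rather than deleting them, leaving pod networking to the cluster's own CNI (kindnet, per the preloaded images listed later in this log). A hedged sketch of how to inspect, and if needed undo, that rename; the restore loop is illustrative and not part of minikube:

	# list what minikube parked out of the way
	ls /etc/cni/net.d/*.mk_disabled
	# restore the originals (only if dismantling the node by hand)
	for f in /etc/cni/net.d/*.mk_disabled; do sudo mv "$f" "${f%.mk_disabled}"; done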
	I0809 18:57:48.963803  907909 start.go:466] detecting cgroup driver to use...
	I0809 18:57:48.963831  907909 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0809 18:57:48.963869  907909 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0809 18:57:48.977977  907909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0809 18:57:48.988337  907909 docker.go:196] disabling cri-docker service (if available) ...
	I0809 18:57:48.988391  907909 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0809 18:57:49.000832  907909 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0809 18:57:49.013990  907909 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0809 18:57:49.091717  907909 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0809 18:57:49.177037  907909 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0809 18:57:49.177075  907909 docker.go:212] disabling docker service ...
	I0809 18:57:49.177121  907909 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0809 18:57:49.194814  907909 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0809 18:57:49.205111  907909 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0809 18:57:49.215403  907909 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0809 18:57:49.276272  907909 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0809 18:57:49.286874  907909 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0809 18:57:49.356149  907909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0809 18:57:49.366511  907909 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0809 18:57:49.381142  907909 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
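The tee above pins crictl to the CRI-O socket via /etc/crictl.yaml, which is why the bare `crictl version` call later in this log works without flags. An equivalent one-off check that bypasses the config file (assuming crictl is on PATH, as it is on this node):

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version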
	I0809 18:57:49.381180  907909 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0809 18:57:49.381232  907909 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0809 18:57:49.390265  907909 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0809 18:57:49.390319  907909 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0809 18:57:49.399103  907909 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0809 18:57:49.408066  907909 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0809 18:57:49.417006  907909 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0809 18:57:49.424999  907909 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0809 18:57:49.431859  907909 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0809 18:57:49.432411  907909 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0809 18:57:49.439781  907909 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0809 18:57:49.515944  907909 ssh_runner.go:195] Run: sudo systemctl restart crio
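The three sed edits above leave /etc/crio/crio.conf.d/02-crio.conf carrying the pause image, cgroup manager, and conmon cgroup this run needs. A quick post-restart check, with the expected lines shown as comments (the cgroup values also appear verbatim in the `crio config` dump later in this log):

	grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.9"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"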
	I0809 18:57:49.617767  907909 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0809 18:57:49.617840  907909 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0809 18:57:49.621133  907909 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0809 18:57:49.621157  907909 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0809 18:57:49.621166  907909 command_runner.go:130] > Device: 40h/64d	Inode: 186         Links: 1
	I0809 18:57:49.621176  907909 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0809 18:57:49.621183  907909 command_runner.go:130] > Access: 2023-08-09 18:57:49.603932831 +0000
	I0809 18:57:49.621192  907909 command_runner.go:130] > Modify: 2023-08-09 18:57:49.603932831 +0000
	I0809 18:57:49.621200  907909 command_runner.go:130] > Change: 2023-08-09 18:57:49.603932831 +0000
	I0809 18:57:49.621205  907909 command_runner.go:130] >  Birth: -
	I0809 18:57:49.621229  907909 start.go:534] Will wait 60s for crictl version
	I0809 18:57:49.621276  907909 ssh_runner.go:195] Run: which crictl
	I0809 18:57:49.624252  907909 command_runner.go:130] > /usr/bin/crictl
	I0809 18:57:49.624351  907909 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0809 18:57:49.655796  907909 command_runner.go:130] > Version:  0.1.0
	I0809 18:57:49.655825  907909 command_runner.go:130] > RuntimeName:  cri-o
	I0809 18:57:49.655837  907909 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0809 18:57:49.655842  907909 command_runner.go:130] > RuntimeApiVersion:  v1
	I0809 18:57:49.657948  907909 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0809 18:57:49.658032  907909 ssh_runner.go:195] Run: crio --version
	I0809 18:57:49.690468  907909 command_runner.go:130] > crio version 1.24.6
	I0809 18:57:49.690491  907909 command_runner.go:130] > Version:          1.24.6
	I0809 18:57:49.690498  907909 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0809 18:57:49.690502  907909 command_runner.go:130] > GitTreeState:     clean
	I0809 18:57:49.690508  907909 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0809 18:57:49.690513  907909 command_runner.go:130] > GoVersion:        go1.18.2
	I0809 18:57:49.690517  907909 command_runner.go:130] > Compiler:         gc
	I0809 18:57:49.690524  907909 command_runner.go:130] > Platform:         linux/amd64
	I0809 18:57:49.690559  907909 command_runner.go:130] > Linkmode:         dynamic
	I0809 18:57:49.690578  907909 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0809 18:57:49.690586  907909 command_runner.go:130] > SeccompEnabled:   true
	I0809 18:57:49.690594  907909 command_runner.go:130] > AppArmorEnabled:  false
	I0809 18:57:49.692297  907909 ssh_runner.go:195] Run: crio --version
	I0809 18:57:49.726399  907909 command_runner.go:130] > crio version 1.24.6
	I0809 18:57:49.726419  907909 command_runner.go:130] > Version:          1.24.6
	I0809 18:57:49.726426  907909 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0809 18:57:49.726431  907909 command_runner.go:130] > GitTreeState:     clean
	I0809 18:57:49.726458  907909 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0809 18:57:49.726465  907909 command_runner.go:130] > GoVersion:        go1.18.2
	I0809 18:57:49.726471  907909 command_runner.go:130] > Compiler:         gc
	I0809 18:57:49.726477  907909 command_runner.go:130] > Platform:         linux/amd64
	I0809 18:57:49.726485  907909 command_runner.go:130] > Linkmode:         dynamic
	I0809 18:57:49.726504  907909 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0809 18:57:49.726510  907909 command_runner.go:130] > SeccompEnabled:   true
	I0809 18:57:49.726517  907909 command_runner.go:130] > AppArmorEnabled:  false
	I0809 18:57:49.728521  907909 out.go:177] * Preparing Kubernetes v1.27.4 on CRI-O 1.24.6 ...
	I0809 18:57:49.730028  907909 cli_runner.go:164] Run: docker network inspect multinode-814696 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0809 18:57:49.746169  907909 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0809 18:57:49.749762  907909 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0809 18:57:49.759821  907909 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0809 18:57:49.759879  907909 ssh_runner.go:195] Run: sudo crictl images --output json
	I0809 18:57:49.807251  907909 command_runner.go:130] > {
	I0809 18:57:49.807274  907909 command_runner.go:130] >   "images": [
	I0809 18:57:49.807279  907909 command_runner.go:130] >     {
	I0809 18:57:49.807291  907909 command_runner.go:130] >       "id": "b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da",
	I0809 18:57:49.807297  907909 command_runner.go:130] >       "repoTags": [
	I0809 18:57:49.807307  907909 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0809 18:57:49.807312  907909 command_runner.go:130] >       ],
	I0809 18:57:49.807318  907909 command_runner.go:130] >       "repoDigests": [
	I0809 18:57:49.807340  907909 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974",
	I0809 18:57:49.807357  907909 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"
	I0809 18:57:49.807367  907909 command_runner.go:130] >       ],
	I0809 18:57:49.807375  907909 command_runner.go:130] >       "size": "65249302",
	I0809 18:57:49.807386  907909 command_runner.go:130] >       "uid": null,
	I0809 18:57:49.807396  907909 command_runner.go:130] >       "username": "",
	I0809 18:57:49.807412  907909 command_runner.go:130] >       "spec": null,
	I0809 18:57:49.807422  907909 command_runner.go:130] >       "pinned": false
	I0809 18:57:49.807428  907909 command_runner.go:130] >     },
	I0809 18:57:49.807437  907909 command_runner.go:130] >     {
	I0809 18:57:49.807449  907909 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0809 18:57:49.807459  907909 command_runner.go:130] >       "repoTags": [
	I0809 18:57:49.807472  907909 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0809 18:57:49.807481  907909 command_runner.go:130] >       ],
	I0809 18:57:49.807489  907909 command_runner.go:130] >       "repoDigests": [
	I0809 18:57:49.807506  907909 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0809 18:57:49.807523  907909 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0809 18:57:49.807532  907909 command_runner.go:130] >       ],
	I0809 18:57:49.807549  907909 command_runner.go:130] >       "size": "31470524",
	I0809 18:57:49.807559  907909 command_runner.go:130] >       "uid": null,
	I0809 18:57:49.807570  907909 command_runner.go:130] >       "username": "",
	I0809 18:57:49.807580  907909 command_runner.go:130] >       "spec": null,
	I0809 18:57:49.807589  907909 command_runner.go:130] >       "pinned": false
	I0809 18:57:49.807598  907909 command_runner.go:130] >     },
	I0809 18:57:49.807605  907909 command_runner.go:130] >     {
	I0809 18:57:49.807619  907909 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0809 18:57:49.807629  907909 command_runner.go:130] >       "repoTags": [
	I0809 18:57:49.807654  907909 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0809 18:57:49.807664  907909 command_runner.go:130] >       ],
	I0809 18:57:49.807671  907909 command_runner.go:130] >       "repoDigests": [
	I0809 18:57:49.807685  907909 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0809 18:57:49.807700  907909 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0809 18:57:49.807707  907909 command_runner.go:130] >       ],
	I0809 18:57:49.807711  907909 command_runner.go:130] >       "size": "53621675",
	I0809 18:57:49.807715  907909 command_runner.go:130] >       "uid": null,
	I0809 18:57:49.807721  907909 command_runner.go:130] >       "username": "",
	I0809 18:57:49.807729  907909 command_runner.go:130] >       "spec": null,
	I0809 18:57:49.807733  907909 command_runner.go:130] >       "pinned": false
	I0809 18:57:49.807739  907909 command_runner.go:130] >     },
	I0809 18:57:49.807743  907909 command_runner.go:130] >     {
	I0809 18:57:49.807751  907909 command_runner.go:130] >       "id": "86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681",
	I0809 18:57:49.807757  907909 command_runner.go:130] >       "repoTags": [
	I0809 18:57:49.807762  907909 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.7-0"
	I0809 18:57:49.807769  907909 command_runner.go:130] >       ],
	I0809 18:57:49.807773  907909 command_runner.go:130] >       "repoDigests": [
	I0809 18:57:49.807782  907909 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83",
	I0809 18:57:49.807791  907909 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9"
	I0809 18:57:49.807804  907909 command_runner.go:130] >       ],
	I0809 18:57:49.807810  907909 command_runner.go:130] >       "size": "297083935",
	I0809 18:57:49.807814  907909 command_runner.go:130] >       "uid": {
	I0809 18:57:49.807818  907909 command_runner.go:130] >         "value": "0"
	I0809 18:57:49.807824  907909 command_runner.go:130] >       },
	I0809 18:57:49.807828  907909 command_runner.go:130] >       "username": "",
	I0809 18:57:49.807836  907909 command_runner.go:130] >       "spec": null,
	I0809 18:57:49.807845  907909 command_runner.go:130] >       "pinned": false
	I0809 18:57:49.807850  907909 command_runner.go:130] >     },
	I0809 18:57:49.807854  907909 command_runner.go:130] >     {
	I0809 18:57:49.807862  907909 command_runner.go:130] >       "id": "e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c",
	I0809 18:57:49.807868  907909 command_runner.go:130] >       "repoTags": [
	I0809 18:57:49.807873  907909 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.27.4"
	I0809 18:57:49.807879  907909 command_runner.go:130] >       ],
	I0809 18:57:49.807883  907909 command_runner.go:130] >       "repoDigests": [
	I0809 18:57:49.807892  907909 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d",
	I0809 18:57:49.807901  907909 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:dcf39b4579f896291ec79bb2ef94ad2b51e2ad1846086df705b06dc3ae20c854"
	I0809 18:57:49.807907  907909 command_runner.go:130] >       ],
	I0809 18:57:49.807912  907909 command_runner.go:130] >       "size": "122078160",
	I0809 18:57:49.807917  907909 command_runner.go:130] >       "uid": {
	I0809 18:57:49.807923  907909 command_runner.go:130] >         "value": "0"
	I0809 18:57:49.807929  907909 command_runner.go:130] >       },
	I0809 18:57:49.807933  907909 command_runner.go:130] >       "username": "",
	I0809 18:57:49.807940  907909 command_runner.go:130] >       "spec": null,
	I0809 18:57:49.807944  907909 command_runner.go:130] >       "pinned": false
	I0809 18:57:49.807953  907909 command_runner.go:130] >     },
	I0809 18:57:49.807959  907909 command_runner.go:130] >     {
	I0809 18:57:49.807965  907909 command_runner.go:130] >       "id": "f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5",
	I0809 18:57:49.807971  907909 command_runner.go:130] >       "repoTags": [
	I0809 18:57:49.807983  907909 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.27.4"
	I0809 18:57:49.807989  907909 command_runner.go:130] >       ],
	I0809 18:57:49.807993  907909 command_runner.go:130] >       "repoDigests": [
	I0809 18:57:49.808001  907909 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265",
	I0809 18:57:49.808011  907909 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c4765f94930681526ac9179fc4e49b5254abcbfa33841af4602a52bc664f6934"
	I0809 18:57:49.808017  907909 command_runner.go:130] >       ],
	I0809 18:57:49.808021  907909 command_runner.go:130] >       "size": "113931062",
	I0809 18:57:49.808027  907909 command_runner.go:130] >       "uid": {
	I0809 18:57:49.808032  907909 command_runner.go:130] >         "value": "0"
	I0809 18:57:49.808037  907909 command_runner.go:130] >       },
	I0809 18:57:49.808041  907909 command_runner.go:130] >       "username": "",
	I0809 18:57:49.808047  907909 command_runner.go:130] >       "spec": null,
	I0809 18:57:49.808051  907909 command_runner.go:130] >       "pinned": false
	I0809 18:57:49.808057  907909 command_runner.go:130] >     },
	I0809 18:57:49.808063  907909 command_runner.go:130] >     {
	I0809 18:57:49.808071  907909 command_runner.go:130] >       "id": "6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4",
	I0809 18:57:49.808077  907909 command_runner.go:130] >       "repoTags": [
	I0809 18:57:49.808082  907909 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.27.4"
	I0809 18:57:49.808086  907909 command_runner.go:130] >       ],
	I0809 18:57:49.808092  907909 command_runner.go:130] >       "repoDigests": [
	I0809 18:57:49.808099  907909 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf",
	I0809 18:57:49.808108  907909 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ce9abe867450f8962eb851670b5869219ca0c3376777d1e18d89f9abedbe10c3"
	I0809 18:57:49.808114  907909 command_runner.go:130] >       ],
	I0809 18:57:49.808118  907909 command_runner.go:130] >       "size": "72714135",
	I0809 18:57:49.808124  907909 command_runner.go:130] >       "uid": null,
	I0809 18:57:49.808129  907909 command_runner.go:130] >       "username": "",
	I0809 18:57:49.808135  907909 command_runner.go:130] >       "spec": null,
	I0809 18:57:49.808139  907909 command_runner.go:130] >       "pinned": false
	I0809 18:57:49.808147  907909 command_runner.go:130] >     },
	I0809 18:57:49.808151  907909 command_runner.go:130] >     {
	I0809 18:57:49.808159  907909 command_runner.go:130] >       "id": "98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16",
	I0809 18:57:49.808165  907909 command_runner.go:130] >       "repoTags": [
	I0809 18:57:49.808172  907909 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.27.4"
	I0809 18:57:49.808178  907909 command_runner.go:130] >       ],
	I0809 18:57:49.808182  907909 command_runner.go:130] >       "repoDigests": [
	I0809 18:57:49.808228  907909 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af",
	I0809 18:57:49.808245  907909 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:9c58009453cfcd7533721327269d2ef0af93d09f21812a5d584c375840117da7"
	I0809 18:57:49.808251  907909 command_runner.go:130] >       ],
	I0809 18:57:49.808257  907909 command_runner.go:130] >       "size": "59814710",
	I0809 18:57:49.808264  907909 command_runner.go:130] >       "uid": {
	I0809 18:57:49.808273  907909 command_runner.go:130] >         "value": "0"
	I0809 18:57:49.808283  907909 command_runner.go:130] >       },
	I0809 18:57:49.808303  907909 command_runner.go:130] >       "username": "",
	I0809 18:57:49.808313  907909 command_runner.go:130] >       "spec": null,
	I0809 18:57:49.808320  907909 command_runner.go:130] >       "pinned": false
	I0809 18:57:49.808326  907909 command_runner.go:130] >     },
	I0809 18:57:49.808336  907909 command_runner.go:130] >     {
	I0809 18:57:49.808348  907909 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0809 18:57:49.808357  907909 command_runner.go:130] >       "repoTags": [
	I0809 18:57:49.808365  907909 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0809 18:57:49.808379  907909 command_runner.go:130] >       ],
	I0809 18:57:49.808390  907909 command_runner.go:130] >       "repoDigests": [
	I0809 18:57:49.808404  907909 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0809 18:57:49.808420  907909 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0809 18:57:49.808429  907909 command_runner.go:130] >       ],
	I0809 18:57:49.808439  907909 command_runner.go:130] >       "size": "750414",
	I0809 18:57:49.808448  907909 command_runner.go:130] >       "uid": {
	I0809 18:57:49.808458  907909 command_runner.go:130] >         "value": "65535"
	I0809 18:57:49.808467  907909 command_runner.go:130] >       },
	I0809 18:57:49.808476  907909 command_runner.go:130] >       "username": "",
	I0809 18:57:49.808486  907909 command_runner.go:130] >       "spec": null,
	I0809 18:57:49.808497  907909 command_runner.go:130] >       "pinned": false
	I0809 18:57:49.808506  907909 command_runner.go:130] >     }
	I0809 18:57:49.808512  907909 command_runner.go:130] >   ]
	I0809 18:57:49.808521  907909 command_runner.go:130] > }
	I0809 18:57:49.810019  907909 crio.go:496] all images are preloaded for cri-o runtime.
	I0809 18:57:49.810038  907909 crio.go:415] Images already preloaded, skipping extraction
	I0809 18:57:49.810077  907909 ssh_runner.go:195] Run: sudo crictl images --output json
	I0809 18:57:49.842217  907909 command_runner.go:130] > {
	I0809 18:57:49.842255  907909 command_runner.go:130] >   "images": [
	I0809 18:57:49.842264  907909 command_runner.go:130] >     {
	I0809 18:57:49.842276  907909 command_runner.go:130] >       "id": "b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da",
	I0809 18:57:49.842285  907909 command_runner.go:130] >       "repoTags": [
	I0809 18:57:49.842291  907909 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0809 18:57:49.842297  907909 command_runner.go:130] >       ],
	I0809 18:57:49.842302  907909 command_runner.go:130] >       "repoDigests": [
	I0809 18:57:49.842312  907909 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974",
	I0809 18:57:49.842321  907909 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"
	I0809 18:57:49.842326  907909 command_runner.go:130] >       ],
	I0809 18:57:49.842332  907909 command_runner.go:130] >       "size": "65249302",
	I0809 18:57:49.842338  907909 command_runner.go:130] >       "uid": null,
	I0809 18:57:49.842342  907909 command_runner.go:130] >       "username": "",
	I0809 18:57:49.842366  907909 command_runner.go:130] >       "spec": null,
	I0809 18:57:49.842376  907909 command_runner.go:130] >       "pinned": false
	I0809 18:57:49.842379  907909 command_runner.go:130] >     },
	I0809 18:57:49.842382  907909 command_runner.go:130] >     {
	I0809 18:57:49.842388  907909 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0809 18:57:49.842392  907909 command_runner.go:130] >       "repoTags": [
	I0809 18:57:49.842397  907909 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0809 18:57:49.842400  907909 command_runner.go:130] >       ],
	I0809 18:57:49.842404  907909 command_runner.go:130] >       "repoDigests": [
	I0809 18:57:49.842411  907909 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0809 18:57:49.842418  907909 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0809 18:57:49.842421  907909 command_runner.go:130] >       ],
	I0809 18:57:49.842428  907909 command_runner.go:130] >       "size": "31470524",
	I0809 18:57:49.842432  907909 command_runner.go:130] >       "uid": null,
	I0809 18:57:49.842437  907909 command_runner.go:130] >       "username": "",
	I0809 18:57:49.842440  907909 command_runner.go:130] >       "spec": null,
	I0809 18:57:49.842446  907909 command_runner.go:130] >       "pinned": false
	I0809 18:57:49.842449  907909 command_runner.go:130] >     },
	I0809 18:57:49.842459  907909 command_runner.go:130] >     {
	I0809 18:57:49.842467  907909 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0809 18:57:49.842471  907909 command_runner.go:130] >       "repoTags": [
	I0809 18:57:49.842476  907909 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0809 18:57:49.842482  907909 command_runner.go:130] >       ],
	I0809 18:57:49.842485  907909 command_runner.go:130] >       "repoDigests": [
	I0809 18:57:49.842492  907909 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0809 18:57:49.842502  907909 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0809 18:57:49.842506  907909 command_runner.go:130] >       ],
	I0809 18:57:49.842512  907909 command_runner.go:130] >       "size": "53621675",
	I0809 18:57:49.842516  907909 command_runner.go:130] >       "uid": null,
	I0809 18:57:49.842520  907909 command_runner.go:130] >       "username": "",
	I0809 18:57:49.842525  907909 command_runner.go:130] >       "spec": null,
	I0809 18:57:49.842529  907909 command_runner.go:130] >       "pinned": false
	I0809 18:57:49.842535  907909 command_runner.go:130] >     },
	I0809 18:57:49.842538  907909 command_runner.go:130] >     {
	I0809 18:57:49.842544  907909 command_runner.go:130] >       "id": "86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681",
	I0809 18:57:49.842550  907909 command_runner.go:130] >       "repoTags": [
	I0809 18:57:49.842557  907909 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.7-0"
	I0809 18:57:49.842563  907909 command_runner.go:130] >       ],
	I0809 18:57:49.842567  907909 command_runner.go:130] >       "repoDigests": [
	I0809 18:57:49.842576  907909 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83",
	I0809 18:57:49.842582  907909 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9"
	I0809 18:57:49.842593  907909 command_runner.go:130] >       ],
	I0809 18:57:49.842597  907909 command_runner.go:130] >       "size": "297083935",
	I0809 18:57:49.842603  907909 command_runner.go:130] >       "uid": {
	I0809 18:57:49.842607  907909 command_runner.go:130] >         "value": "0"
	I0809 18:57:49.842610  907909 command_runner.go:130] >       },
	I0809 18:57:49.842615  907909 command_runner.go:130] >       "username": "",
	I0809 18:57:49.842620  907909 command_runner.go:130] >       "spec": null,
	I0809 18:57:49.842624  907909 command_runner.go:130] >       "pinned": false
	I0809 18:57:49.842630  907909 command_runner.go:130] >     },
	I0809 18:57:49.842633  907909 command_runner.go:130] >     {
	I0809 18:57:49.842639  907909 command_runner.go:130] >       "id": "e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c",
	I0809 18:57:49.842645  907909 command_runner.go:130] >       "repoTags": [
	I0809 18:57:49.842650  907909 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.27.4"
	I0809 18:57:49.842660  907909 command_runner.go:130] >       ],
	I0809 18:57:49.842666  907909 command_runner.go:130] >       "repoDigests": [
	I0809 18:57:49.842674  907909 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d",
	I0809 18:57:49.842683  907909 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:dcf39b4579f896291ec79bb2ef94ad2b51e2ad1846086df705b06dc3ae20c854"
	I0809 18:57:49.842687  907909 command_runner.go:130] >       ],
	I0809 18:57:49.842691  907909 command_runner.go:130] >       "size": "122078160",
	I0809 18:57:49.842695  907909 command_runner.go:130] >       "uid": {
	I0809 18:57:49.842699  907909 command_runner.go:130] >         "value": "0"
	I0809 18:57:49.842702  907909 command_runner.go:130] >       },
	I0809 18:57:49.842707  907909 command_runner.go:130] >       "username": "",
	I0809 18:57:49.842713  907909 command_runner.go:130] >       "spec": null,
	I0809 18:57:49.842717  907909 command_runner.go:130] >       "pinned": false
	I0809 18:57:49.842723  907909 command_runner.go:130] >     },
	I0809 18:57:49.842726  907909 command_runner.go:130] >     {
	I0809 18:57:49.842732  907909 command_runner.go:130] >       "id": "f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5",
	I0809 18:57:49.842739  907909 command_runner.go:130] >       "repoTags": [
	I0809 18:57:49.842744  907909 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.27.4"
	I0809 18:57:49.842749  907909 command_runner.go:130] >       ],
	I0809 18:57:49.842755  907909 command_runner.go:130] >       "repoDigests": [
	I0809 18:57:49.842764  907909 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265",
	I0809 18:57:49.842773  907909 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c4765f94930681526ac9179fc4e49b5254abcbfa33841af4602a52bc664f6934"
	I0809 18:57:49.842779  907909 command_runner.go:130] >       ],
	I0809 18:57:49.842783  907909 command_runner.go:130] >       "size": "113931062",
	I0809 18:57:49.842786  907909 command_runner.go:130] >       "uid": {
	I0809 18:57:49.842790  907909 command_runner.go:130] >         "value": "0"
	I0809 18:57:49.842794  907909 command_runner.go:130] >       },
	I0809 18:57:49.842797  907909 command_runner.go:130] >       "username": "",
	I0809 18:57:49.842801  907909 command_runner.go:130] >       "spec": null,
	I0809 18:57:49.842808  907909 command_runner.go:130] >       "pinned": false
	I0809 18:57:49.842811  907909 command_runner.go:130] >     },
	I0809 18:57:49.842817  907909 command_runner.go:130] >     {
	I0809 18:57:49.842823  907909 command_runner.go:130] >       "id": "6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4",
	I0809 18:57:49.842829  907909 command_runner.go:130] >       "repoTags": [
	I0809 18:57:49.842834  907909 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.27.4"
	I0809 18:57:49.842840  907909 command_runner.go:130] >       ],
	I0809 18:57:49.842843  907909 command_runner.go:130] >       "repoDigests": [
	I0809 18:57:49.842853  907909 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf",
	I0809 18:57:49.842862  907909 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ce9abe867450f8962eb851670b5869219ca0c3376777d1e18d89f9abedbe10c3"
	I0809 18:57:49.842868  907909 command_runner.go:130] >       ],
	I0809 18:57:49.842872  907909 command_runner.go:130] >       "size": "72714135",
	I0809 18:57:49.842875  907909 command_runner.go:130] >       "uid": null,
	I0809 18:57:49.842879  907909 command_runner.go:130] >       "username": "",
	I0809 18:57:49.842883  907909 command_runner.go:130] >       "spec": null,
	I0809 18:57:49.842887  907909 command_runner.go:130] >       "pinned": false
	I0809 18:57:49.842890  907909 command_runner.go:130] >     },
	I0809 18:57:49.842894  907909 command_runner.go:130] >     {
	I0809 18:57:49.842903  907909 command_runner.go:130] >       "id": "98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16",
	I0809 18:57:49.842910  907909 command_runner.go:130] >       "repoTags": [
	I0809 18:57:49.842914  907909 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.27.4"
	I0809 18:57:49.842920  907909 command_runner.go:130] >       ],
	I0809 18:57:49.842924  907909 command_runner.go:130] >       "repoDigests": [
	I0809 18:57:49.842944  907909 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af",
	I0809 18:57:49.842954  907909 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:9c58009453cfcd7533721327269d2ef0af93d09f21812a5d584c375840117da7"
	I0809 18:57:49.842957  907909 command_runner.go:130] >       ],
	I0809 18:57:49.842964  907909 command_runner.go:130] >       "size": "59814710",
	I0809 18:57:49.842970  907909 command_runner.go:130] >       "uid": {
	I0809 18:57:49.842974  907909 command_runner.go:130] >         "value": "0"
	I0809 18:57:49.842977  907909 command_runner.go:130] >       },
	I0809 18:57:49.842981  907909 command_runner.go:130] >       "username": "",
	I0809 18:57:49.842987  907909 command_runner.go:130] >       "spec": null,
	I0809 18:57:49.842991  907909 command_runner.go:130] >       "pinned": false
	I0809 18:57:49.842997  907909 command_runner.go:130] >     },
	I0809 18:57:49.843000  907909 command_runner.go:130] >     {
	I0809 18:57:49.843006  907909 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0809 18:57:49.843013  907909 command_runner.go:130] >       "repoTags": [
	I0809 18:57:49.843017  907909 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0809 18:57:49.843023  907909 command_runner.go:130] >       ],
	I0809 18:57:49.843027  907909 command_runner.go:130] >       "repoDigests": [
	I0809 18:57:49.843035  907909 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0809 18:57:49.843045  907909 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0809 18:57:49.843052  907909 command_runner.go:130] >       ],
	I0809 18:57:49.843056  907909 command_runner.go:130] >       "size": "750414",
	I0809 18:57:49.843065  907909 command_runner.go:130] >       "uid": {
	I0809 18:57:49.843069  907909 command_runner.go:130] >         "value": "65535"
	I0809 18:57:49.843075  907909 command_runner.go:130] >       },
	I0809 18:57:49.843079  907909 command_runner.go:130] >       "username": "",
	I0809 18:57:49.843083  907909 command_runner.go:130] >       "spec": null,
	I0809 18:57:49.843089  907909 command_runner.go:130] >       "pinned": false
	I0809 18:57:49.843092  907909 command_runner.go:130] >     }
	I0809 18:57:49.843095  907909 command_runner.go:130] >   ]
	I0809 18:57:49.843099  907909 command_runner.go:130] > }
	I0809 18:57:49.843224  907909 crio.go:496] all images are preloaded for cri-o runtime.
	I0809 18:57:49.843235  907909 cache_images.go:84] Images are preloaded, skipping loading
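Both `crictl images --output json` dumps above return the same nine images (kindnetd, storage-provisioner, coredns, etcd, the four kube-* components, and pause), which is what lets minikube skip extracting the preload tarball. A compact way to eyeball the same inventory, assuming jq is installed (it is not part of minikube itself):

	sudo crictl images --output json | jq -r '.images[].repoTags[]'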
	I0809 18:57:49.843290  907909 ssh_runner.go:195] Run: crio config
	I0809 18:57:49.880622  907909 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0809 18:57:49.880653  907909 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0809 18:57:49.880662  907909 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0809 18:57:49.880667  907909 command_runner.go:130] > #
	I0809 18:57:49.880678  907909 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0809 18:57:49.880688  907909 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0809 18:57:49.880707  907909 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0809 18:57:49.880728  907909 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0809 18:57:49.880738  907909 command_runner.go:130] > # reload'.
	I0809 18:57:49.880749  907909 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0809 18:57:49.880762  907909 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0809 18:57:49.880773  907909 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0809 18:57:49.880783  907909 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0809 18:57:49.880791  907909 command_runner.go:130] > [crio]
	I0809 18:57:49.880803  907909 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0809 18:57:49.880815  907909 command_runner.go:130] > # containers images, in this directory.
	I0809 18:57:49.880833  907909 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0809 18:57:49.880847  907909 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0809 18:57:49.880861  907909 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0809 18:57:49.880874  907909 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0809 18:57:49.880888  907909 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0809 18:57:49.880896  907909 command_runner.go:130] > # storage_driver = "vfs"
	I0809 18:57:49.880909  907909 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0809 18:57:49.880922  907909 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0809 18:57:49.880932  907909 command_runner.go:130] > # storage_option = [
	I0809 18:57:49.880938  907909 command_runner.go:130] > # ]
	I0809 18:57:49.880948  907909 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0809 18:57:49.880963  907909 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0809 18:57:49.880970  907909 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0809 18:57:49.880980  907909 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0809 18:57:49.880990  907909 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0809 18:57:49.880998  907909 command_runner.go:130] > # always happen on a node reboot
	I0809 18:57:49.881006  907909 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0809 18:57:49.881016  907909 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0809 18:57:49.881026  907909 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0809 18:57:49.881042  907909 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0809 18:57:49.881054  907909 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0809 18:57:49.881065  907909 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0809 18:57:49.881077  907909 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0809 18:57:49.881083  907909 command_runner.go:130] > # internal_wipe = true
	I0809 18:57:49.881090  907909 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0809 18:57:49.881100  907909 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0809 18:57:49.881108  907909 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0809 18:57:49.881116  907909 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0809 18:57:49.881130  907909 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0809 18:57:49.881137  907909 command_runner.go:130] > [crio.api]
	I0809 18:57:49.881145  907909 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0809 18:57:49.881152  907909 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0809 18:57:49.881160  907909 command_runner.go:130] > # IP address on which the stream server will listen.
	I0809 18:57:49.881168  907909 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0809 18:57:49.881179  907909 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0809 18:57:49.881188  907909 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0809 18:57:49.881201  907909 command_runner.go:130] > # stream_port = "0"
	I0809 18:57:49.881210  907909 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0809 18:57:49.881221  907909 command_runner.go:130] > # stream_enable_tls = false
	I0809 18:57:49.881232  907909 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0809 18:57:49.881239  907909 command_runner.go:130] > # stream_idle_timeout = ""
	I0809 18:57:49.881250  907909 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0809 18:57:49.881260  907909 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0809 18:57:49.881266  907909 command_runner.go:130] > # minutes.
	I0809 18:57:49.881272  907909 command_runner.go:130] > # stream_tls_cert = ""
	I0809 18:57:49.881282  907909 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0809 18:57:49.881293  907909 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0809 18:57:49.881299  907909 command_runner.go:130] > # stream_tls_key = ""
	I0809 18:57:49.881312  907909 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0809 18:57:49.881322  907909 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0809 18:57:49.881331  907909 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0809 18:57:49.881371  907909 command_runner.go:130] > # stream_tls_ca = ""
	I0809 18:57:49.881384  907909 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0809 18:57:49.881391  907909 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0809 18:57:49.881399  907909 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0809 18:57:49.881403  907909 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0809 18:57:49.881829  907909 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0809 18:57:49.881856  907909 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0809 18:57:49.881882  907909 command_runner.go:130] > [crio.runtime]
	I0809 18:57:49.881909  907909 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0809 18:57:49.881923  907909 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0809 18:57:49.881936  907909 command_runner.go:130] > # "nofile=1024:2048"
	I0809 18:57:49.881961  907909 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0809 18:57:49.881971  907909 command_runner.go:130] > # default_ulimits = [
	I0809 18:57:49.881980  907909 command_runner.go:130] > # ]
	I0809 18:57:49.881996  907909 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0809 18:57:49.882014  907909 command_runner.go:130] > # no_pivot = false
	I0809 18:57:49.882026  907909 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0809 18:57:49.882038  907909 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0809 18:57:49.882052  907909 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0809 18:57:49.882063  907909 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0809 18:57:49.882071  907909 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0809 18:57:49.882081  907909 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0809 18:57:49.882085  907909 command_runner.go:130] > # conmon = ""
	I0809 18:57:49.882092  907909 command_runner.go:130] > # Cgroup setting for conmon
	I0809 18:57:49.882102  907909 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0809 18:57:49.882107  907909 command_runner.go:130] > conmon_cgroup = "pod"
	I0809 18:57:49.882113  907909 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0809 18:57:49.882121  907909 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0809 18:57:49.882127  907909 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0809 18:57:49.882131  907909 command_runner.go:130] > # conmon_env = [
	I0809 18:57:49.882135  907909 command_runner.go:130] > # ]
	I0809 18:57:49.882142  907909 command_runner.go:130] > # Additional environment variables to set for all the
	I0809 18:57:49.882148  907909 command_runner.go:130] > # containers. These are overridden if set in the
	I0809 18:57:49.882154  907909 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0809 18:57:49.882160  907909 command_runner.go:130] > # default_env = [
	I0809 18:57:49.882164  907909 command_runner.go:130] > # ]
	I0809 18:57:49.882169  907909 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0809 18:57:49.882179  907909 command_runner.go:130] > # selinux = false
	I0809 18:57:49.882188  907909 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0809 18:57:49.882195  907909 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0809 18:57:49.882200  907909 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0809 18:57:49.882210  907909 command_runner.go:130] > # seccomp_profile = ""
	I0809 18:57:49.882216  907909 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0809 18:57:49.882222  907909 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0809 18:57:49.882230  907909 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0809 18:57:49.882237  907909 command_runner.go:130] > # which might increase security.
	I0809 18:57:49.882242  907909 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0809 18:57:49.882251  907909 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0809 18:57:49.882257  907909 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0809 18:57:49.882265  907909 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0809 18:57:49.882272  907909 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0809 18:57:49.882277  907909 command_runner.go:130] > # This option supports live configuration reload.
	I0809 18:57:49.882284  907909 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0809 18:57:49.882291  907909 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0809 18:57:49.882296  907909 command_runner.go:130] > # the cgroup blockio controller.
	I0809 18:57:49.882300  907909 command_runner.go:130] > # blockio_config_file = ""
	I0809 18:57:49.882309  907909 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0809 18:57:49.882313  907909 command_runner.go:130] > # irqbalance daemon.
	I0809 18:57:49.882318  907909 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0809 18:57:49.882329  907909 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0809 18:57:49.882334  907909 command_runner.go:130] > # This option supports live configuration reload.
	I0809 18:57:49.882338  907909 command_runner.go:130] > # rdt_config_file = ""
	I0809 18:57:49.882346  907909 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0809 18:57:49.882350  907909 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0809 18:57:49.882356  907909 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0809 18:57:49.882363  907909 command_runner.go:130] > # separate_pull_cgroup = ""
	I0809 18:57:49.882369  907909 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0809 18:57:49.882375  907909 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0809 18:57:49.882382  907909 command_runner.go:130] > # will be added.
	I0809 18:57:49.882386  907909 command_runner.go:130] > # default_capabilities = [
	I0809 18:57:49.882390  907909 command_runner.go:130] > # 	"CHOWN",
	I0809 18:57:49.882393  907909 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0809 18:57:49.882397  907909 command_runner.go:130] > # 	"FSETID",
	I0809 18:57:49.882401  907909 command_runner.go:130] > # 	"FOWNER",
	I0809 18:57:49.882407  907909 command_runner.go:130] > # 	"SETGID",
	I0809 18:57:49.882411  907909 command_runner.go:130] > # 	"SETUID",
	I0809 18:57:49.882415  907909 command_runner.go:130] > # 	"SETPCAP",
	I0809 18:57:49.882422  907909 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0809 18:57:49.882425  907909 command_runner.go:130] > # 	"KILL",
	I0809 18:57:49.882429  907909 command_runner.go:130] > # ]
	I0809 18:57:49.882439  907909 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0809 18:57:49.882445  907909 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0809 18:57:49.882453  907909 command_runner.go:130] > # add_inheritable_capabilities = true
	I0809 18:57:49.882460  907909 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0809 18:57:49.882582  907909 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0809 18:57:49.882589  907909 command_runner.go:130] > # default_sysctls = [
	I0809 18:57:49.882593  907909 command_runner.go:130] > # ]
	I0809 18:57:49.882597  907909 command_runner.go:130] > # List of devices on the host that a
	I0809 18:57:49.882608  907909 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0809 18:57:49.882612  907909 command_runner.go:130] > # allowed_devices = [
	I0809 18:57:49.882616  907909 command_runner.go:130] > # 	"/dev/fuse",
	I0809 18:57:49.882619  907909 command_runner.go:130] > # ]
	I0809 18:57:49.882627  907909 command_runner.go:130] > # List of additional devices. specified as
	I0809 18:57:49.882698  907909 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0809 18:57:49.882710  907909 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0809 18:57:49.882723  907909 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0809 18:57:49.882730  907909 command_runner.go:130] > # additional_devices = [
	I0809 18:57:49.882734  907909 command_runner.go:130] > # ]
	I0809 18:57:49.882739  907909 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0809 18:57:49.882745  907909 command_runner.go:130] > # cdi_spec_dirs = [
	I0809 18:57:49.882749  907909 command_runner.go:130] > # 	"/etc/cdi",
	I0809 18:57:49.882752  907909 command_runner.go:130] > # 	"/var/run/cdi",
	I0809 18:57:49.882756  907909 command_runner.go:130] > # ]
	I0809 18:57:49.882765  907909 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0809 18:57:49.882771  907909 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0809 18:57:49.882775  907909 command_runner.go:130] > # Defaults to false.
	I0809 18:57:49.882783  907909 command_runner.go:130] > # device_ownership_from_security_context = false
	I0809 18:57:49.882789  907909 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0809 18:57:49.882795  907909 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0809 18:57:49.882801  907909 command_runner.go:130] > # hooks_dir = [
	I0809 18:57:49.882806  907909 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0809 18:57:49.882809  907909 command_runner.go:130] > # ]
	I0809 18:57:49.882815  907909 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0809 18:57:49.882828  907909 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0809 18:57:49.882833  907909 command_runner.go:130] > # its default mounts from the following two files:
	I0809 18:57:49.882836  907909 command_runner.go:130] > #
	I0809 18:57:49.882845  907909 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0809 18:57:49.882851  907909 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0809 18:57:49.882857  907909 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0809 18:57:49.882860  907909 command_runner.go:130] > #
	I0809 18:57:49.882869  907909 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0809 18:57:49.882875  907909 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0809 18:57:49.882884  907909 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0809 18:57:49.882889  907909 command_runner.go:130] > #      only add mounts it finds in this file.
	I0809 18:57:49.882892  907909 command_runner.go:130] > #
	I0809 18:57:49.882897  907909 command_runner.go:130] > # default_mounts_file = ""
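For example, an override file in the /SRC:/DST, one-mount-per-line format described above (the mount pair is hypothetical):

cat <<'EOF' | sudo tee /etc/containers/mounts.conf
/usr/share/secrets:/run/secrets
EOF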
	I0809 18:57:49.882907  907909 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0809 18:57:49.882913  907909 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0809 18:57:49.882917  907909 command_runner.go:130] > # pids_limit = 0
	I0809 18:57:49.882974  907909 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0809 18:57:49.882996  907909 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0809 18:57:49.883016  907909 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0809 18:57:49.883031  907909 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0809 18:57:49.883039  907909 command_runner.go:130] > # log_size_max = -1
	I0809 18:57:49.883056  907909 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0809 18:57:49.883065  907909 command_runner.go:130] > # log_to_journald = false
	I0809 18:57:49.883074  907909 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0809 18:57:49.883082  907909 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0809 18:57:49.883089  907909 command_runner.go:130] > # Path to directory for container attach sockets.
	I0809 18:57:49.883094  907909 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0809 18:57:49.883102  907909 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0809 18:57:49.883107  907909 command_runner.go:130] > # bind_mount_prefix = ""
	I0809 18:57:49.883113  907909 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0809 18:57:49.883117  907909 command_runner.go:130] > # read_only = false
	I0809 18:57:49.883125  907909 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0809 18:57:49.883131  907909 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0809 18:57:49.883138  907909 command_runner.go:130] > # live configuration reload.
	I0809 18:57:49.883142  907909 command_runner.go:130] > # log_level = "info"
	I0809 18:57:49.883148  907909 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0809 18:57:49.883153  907909 command_runner.go:130] > # This option supports live configuration reload.
	I0809 18:57:49.883159  907909 command_runner.go:130] > # log_filter = ""
	I0809 18:57:49.883165  907909 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0809 18:57:49.883171  907909 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0809 18:57:49.883177  907909 command_runner.go:130] > # separated by comma.
	I0809 18:57:49.883184  907909 command_runner.go:130] > # uid_mappings = ""
	I0809 18:57:49.883189  907909 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0809 18:57:49.883198  907909 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0809 18:57:49.883202  907909 command_runner.go:130] > # separated by comma.
	I0809 18:57:49.883205  907909 command_runner.go:130] > # gid_mappings = ""
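A sketch of the containerUID:HostUID:Size syntax, assuming the host has the ID range 100000-165535 free for containers:

cat <<'EOF' | sudo tee /etc/crio/crio.conf.d/10-userns.conf
[crio.runtime]
# Container UID/GID 0 maps to host ID 100000, with 65536 IDs available.
uid_mappings = "0:100000:65536"
gid_mappings = "0:100000:65536"
EOF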
	I0809 18:57:49.883211  907909 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0809 18:57:49.883220  907909 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0809 18:57:49.883226  907909 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0809 18:57:49.883230  907909 command_runner.go:130] > # minimum_mappable_uid = -1
	I0809 18:57:49.883240  907909 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0809 18:57:49.883252  907909 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0809 18:57:49.883261  907909 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0809 18:57:49.883265  907909 command_runner.go:130] > # minimum_mappable_gid = -1
	I0809 18:57:49.883306  907909 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0809 18:57:49.883316  907909 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0809 18:57:49.883322  907909 command_runner.go:130] > # value is 30s; lower values are ignored by CRI-O.
	I0809 18:57:49.883326  907909 command_runner.go:130] > # ctr_stop_timeout = 30
	I0809 18:57:49.883334  907909 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0809 18:57:49.883344  907909 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0809 18:57:49.883351  907909 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0809 18:57:49.883359  907909 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0809 18:57:49.883364  907909 command_runner.go:130] > # drop_infra_ctr = true
	I0809 18:57:49.883370  907909 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0809 18:57:49.883378  907909 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0809 18:57:49.883385  907909 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0809 18:57:49.883389  907909 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0809 18:57:49.883398  907909 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0809 18:57:49.883403  907909 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0809 18:57:49.883407  907909 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0809 18:57:49.883416  907909 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0809 18:57:49.883420  907909 command_runner.go:130] > # pinns_path = ""
	I0809 18:57:49.883426  907909 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0809 18:57:49.883435  907909 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0809 18:57:49.883441  907909 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0809 18:57:49.883445  907909 command_runner.go:130] > # default_runtime = "runc"
	I0809 18:57:49.883453  907909 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0809 18:57:49.883477  907909 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating them as directories).
	I0809 18:57:49.883487  907909 command_runner.go:130] > # This option protects against source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0809 18:57:49.883492  907909 command_runner.go:130] > # creation as a file is not desired either.
	I0809 18:57:49.883503  907909 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0809 18:57:49.883508  907909 command_runner.go:130] > # the hostname is being managed dynamically.
	I0809 18:57:49.883799  907909 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0809 18:57:49.883817  907909 command_runner.go:130] > # ]
	I0809 18:57:49.883828  907909 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0809 18:57:49.883839  907909 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0809 18:57:49.883853  907909 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0809 18:57:49.883867  907909 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0809 18:57:49.883875  907909 command_runner.go:130] > #
	I0809 18:57:49.883886  907909 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0809 18:57:49.883897  907909 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0809 18:57:49.883908  907909 command_runner.go:130] > #  runtime_type = "oci"
	I0809 18:57:49.883918  907909 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0809 18:57:49.883929  907909 command_runner.go:130] > #  privileged_without_host_devices = false
	I0809 18:57:49.883940  907909 command_runner.go:130] > #  allowed_annotations = []
	I0809 18:57:49.883947  907909 command_runner.go:130] > # Where:
	I0809 18:57:49.883960  907909 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0809 18:57:49.883982  907909 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0809 18:57:49.883996  907909 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0809 18:57:49.884009  907909 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0809 18:57:49.884018  907909 command_runner.go:130] > #   in $PATH.
	I0809 18:57:49.884031  907909 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0809 18:57:49.884042  907909 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0809 18:57:49.884055  907909 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0809 18:57:49.884064  907909 command_runner.go:130] > #   state.
	I0809 18:57:49.884077  907909 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0809 18:57:49.884090  907909 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0809 18:57:49.884103  907909 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0809 18:57:49.884115  907909 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0809 18:57:49.884128  907909 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0809 18:57:49.884142  907909 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0809 18:57:49.884153  907909 command_runner.go:130] > #   The currently recognized values are:
	I0809 18:57:49.884167  907909 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0809 18:57:49.884183  907909 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0809 18:57:49.884195  907909 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0809 18:57:49.884207  907909 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0809 18:57:49.884221  907909 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0809 18:57:49.884234  907909 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0809 18:57:49.884246  907909 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0809 18:57:49.884259  907909 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0809 18:57:49.884308  907909 command_runner.go:130] > #   should be moved to the container's cgroup
	I0809 18:57:49.884317  907909 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0809 18:57:49.884325  907909 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0809 18:57:49.884334  907909 command_runner.go:130] > runtime_type = "oci"
	I0809 18:57:49.884343  907909 command_runner.go:130] > runtime_root = "/run/runc"
	I0809 18:57:49.884349  907909 command_runner.go:130] > runtime_config_path = ""
	I0809 18:57:49.884358  907909 command_runner.go:130] > monitor_path = ""
	I0809 18:57:49.884364  907909 command_runner.go:130] > monitor_cgroup = ""
	I0809 18:57:49.884373  907909 command_runner.go:130] > monitor_exec_cgroup = ""
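Registering another handler follows the same table format; a sketch for crun (the binary path is an assumption), plus the Kubernetes RuntimeClass that exposes it, whose handler must match the table key:

cat <<'EOF' | sudo tee /etc/crio/crio.conf.d/10-crun.conf
[crio.runtime.runtimes.crun]
runtime_path = "/usr/bin/crun"
runtime_type = "oci"
runtime_root = "/run/crun"
EOF
sudo systemctl restart crio
cat <<'EOF' | kubectl apply -f -
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: crun
handler: crun
EOF

Pods then select the handler with spec.runtimeClassName: crun.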
	I0809 18:57:49.884401  907909 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0809 18:57:49.884412  907909 command_runner.go:130] > # running containers
	I0809 18:57:49.884418  907909 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0809 18:57:49.884437  907909 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0809 18:57:49.884452  907909 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0809 18:57:49.884463  907909 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I0809 18:57:49.884474  907909 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0809 18:57:49.884483  907909 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0809 18:57:49.884492  907909 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0809 18:57:49.884501  907909 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0809 18:57:49.884509  907909 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0809 18:57:49.884519  907909 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0809 18:57:49.884531  907909 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0809 18:57:49.884543  907909 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0809 18:57:49.884555  907909 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0809 18:57:49.884569  907909 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0809 18:57:49.884583  907909 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0809 18:57:49.884594  907909 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0809 18:57:49.884608  907909 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0809 18:57:49.884623  907909 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0809 18:57:49.884634  907909 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0809 18:57:49.884649  907909 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0809 18:57:49.884658  907909 command_runner.go:130] > # Example:
	I0809 18:57:49.884668  907909 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0809 18:57:49.884679  907909 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0809 18:57:49.884686  907909 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0809 18:57:49.884704  907909 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0809 18:57:49.884713  907909 command_runner.go:130] > # cpuset = "0-1"
	I0809 18:57:49.884721  907909 command_runner.go:130] > # cpushares = "5"
	I0809 18:57:49.884730  907909 command_runner.go:130] > # Where:
	I0809 18:57:49.884740  907909 command_runner.go:130] > # The workload name is workload-type.
	I0809 18:57:49.884755  907909 command_runner.go:130] > # To select this workload, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0809 18:57:49.884767  907909 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0809 18:57:49.884779  907909 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0809 18:57:49.884796  907909 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0809 18:57:49.884810  907909 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0809 18:57:49.884819  907909 command_runner.go:130] > # 
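End to end, the workload machinery sketched above could look like the following; the workload name, annotation values and share counts are all illustrative, and the per-container override follows the annotation_prefix/$container_name example form shown just above:

cat <<'EOF' | sudo tee /etc/crio/crio.conf.d/10-workloads.conf
[crio.runtime.workloads.throttled]
activation_annotation = "io.crio/throttled"
annotation_prefix = "io.crio.throttled"
[crio.runtime.workloads.throttled.resources]
cpushares = 512
EOF
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: throttled-demo
  annotations:
    io.crio/throttled: ""                        # key-only opt-in; the value is ignored
    io.crio.throttled/app: '{"cpushares": "256"}' # per-container override of the default
spec:
  containers:
  - name: app
    image: nginx
EOF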
	I0809 18:57:49.884834  907909 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0809 18:57:49.884842  907909 command_runner.go:130] > #
	I0809 18:57:49.884857  907909 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0809 18:57:49.884870  907909 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0809 18:57:49.884883  907909 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0809 18:57:49.884897  907909 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0809 18:57:49.884910  907909 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0809 18:57:49.884918  907909 command_runner.go:130] > [crio.image]
	I0809 18:57:49.884930  907909 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0809 18:57:49.884939  907909 command_runner.go:130] > # default_transport = "docker://"
	I0809 18:57:49.884949  907909 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0809 18:57:49.884962  907909 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0809 18:57:49.884971  907909 command_runner.go:130] > # global_auth_file = ""
	I0809 18:57:49.884983  907909 command_runner.go:130] > # The image used to instantiate infra containers.
	I0809 18:57:49.884994  907909 command_runner.go:130] > # This option supports live configuration reload.
	I0809 18:57:49.885004  907909 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0809 18:57:49.885017  907909 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0809 18:57:49.885030  907909 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0809 18:57:49.885040  907909 command_runner.go:130] > # This option supports live configuration reload.
	I0809 18:57:49.885050  907909 command_runner.go:130] > # pause_image_auth_file = ""
	I0809 18:57:49.885063  907909 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0809 18:57:49.885076  907909 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0809 18:57:49.885089  907909 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0809 18:57:49.885102  907909 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0809 18:57:49.885112  907909 command_runner.go:130] > # pause_command = "/pause"
	I0809 18:57:49.885122  907909 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0809 18:57:49.885166  907909 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0809 18:57:49.885179  907909 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0809 18:57:49.885188  907909 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0809 18:57:49.885199  907909 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0809 18:57:49.885206  907909 command_runner.go:130] > # signature_policy = ""
	I0809 18:57:49.885218  907909 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0809 18:57:49.885231  907909 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0809 18:57:49.885241  907909 command_runner.go:130] > # changing them here.
	I0809 18:57:49.885253  907909 command_runner.go:130] > # insecure_registries = [
	I0809 18:57:49.885261  907909 command_runner.go:130] > # ]
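Per the advice above, insecure registries are better declared system-wide; a sketch in containers-registries.conf v2 syntax (the registry host is hypothetical):

cat <<'EOF' | sudo tee -a /etc/containers/registries.conf
[[registry]]
location = "registry.internal.example:5000"
insecure = true
EOF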
	I0809 18:57:49.885273  907909 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0809 18:57:49.885285  907909 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0809 18:57:49.885299  907909 command_runner.go:130] > # image_volumes = "mkdir"
	I0809 18:57:49.885311  907909 command_runner.go:130] > # Temporary directory to use for storing big files
	I0809 18:57:49.885322  907909 command_runner.go:130] > # big_files_temporary_dir = ""
	I0809 18:57:49.885336  907909 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0809 18:57:49.885345  907909 command_runner.go:130] > # CNI plugins.
	I0809 18:57:49.885354  907909 command_runner.go:130] > [crio.network]
	I0809 18:57:49.885365  907909 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0809 18:57:49.885377  907909 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0809 18:57:49.885385  907909 command_runner.go:130] > # cni_default_network = ""
	I0809 18:57:49.885396  907909 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0809 18:57:49.885406  907909 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0809 18:57:49.885417  907909 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0809 18:57:49.885426  907909 command_runner.go:130] > # plugin_dirs = [
	I0809 18:57:49.885432  907909 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0809 18:57:49.885440  907909 command_runner.go:130] > # ]
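A sketch of a config file CRI-O would pick up from network_dir, assuming the standard bridge and host-local plugins exist under plugin_dirs (network name and subnet are illustrative):

cat <<'EOF' | sudo tee /etc/cni/net.d/10-bridge.conflist
{
  "cniVersion": "0.4.0",
  "name": "bridge-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    }
  ]
}
EOF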
	I0809 18:57:49.885450  907909 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I0809 18:57:49.885460  907909 command_runner.go:130] > [crio.metrics]
	I0809 18:57:49.885469  907909 command_runner.go:130] > # Globally enable or disable metrics support.
	I0809 18:57:49.885479  907909 command_runner.go:130] > # enable_metrics = false
	I0809 18:57:49.885489  907909 command_runner.go:130] > # Specify enabled metrics collectors.
	I0809 18:57:49.885499  907909 command_runner.go:130] > # By default, all metrics are enabled.
	I0809 18:57:49.885508  907909 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0809 18:57:49.885516  907909 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0809 18:57:49.885522  907909 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0809 18:57:49.885528  907909 command_runner.go:130] > # metrics_collectors = [
	I0809 18:57:49.885532  907909 command_runner.go:130] > # 	"operations",
	I0809 18:57:49.885538  907909 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0809 18:57:49.885543  907909 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0809 18:57:49.885549  907909 command_runner.go:130] > # 	"operations_errors",
	I0809 18:57:49.885553  907909 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0809 18:57:49.885560  907909 command_runner.go:130] > # 	"image_pulls_by_name",
	I0809 18:57:49.885565  907909 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0809 18:57:49.885571  907909 command_runner.go:130] > # 	"image_pulls_failures",
	I0809 18:57:49.885575  907909 command_runner.go:130] > # 	"image_pulls_successes",
	I0809 18:57:49.885581  907909 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0809 18:57:49.885587  907909 command_runner.go:130] > # 	"image_layer_reuse",
	I0809 18:57:49.885591  907909 command_runner.go:130] > # 	"containers_oom_total",
	I0809 18:57:49.885597  907909 command_runner.go:130] > # 	"containers_oom",
	I0809 18:57:49.885602  907909 command_runner.go:130] > # 	"processes_defunct",
	I0809 18:57:49.885608  907909 command_runner.go:130] > # 	"operations_total",
	I0809 18:57:49.885612  907909 command_runner.go:130] > # 	"operations_latency_seconds",
	I0809 18:57:49.885619  907909 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0809 18:57:49.885626  907909 command_runner.go:130] > # 	"operations_errors_total",
	I0809 18:57:49.885636  907909 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0809 18:57:49.885643  907909 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0809 18:57:49.885658  907909 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0809 18:57:49.885669  907909 command_runner.go:130] > # 	"image_pulls_success_total",
	I0809 18:57:49.885678  907909 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0809 18:57:49.885687  907909 command_runner.go:130] > # 	"containers_oom_count_total",
	I0809 18:57:49.885700  907909 command_runner.go:130] > # ]
	I0809 18:57:49.885712  907909 command_runner.go:130] > # The port on which the metrics server will listen.
	I0809 18:57:49.885718  907909 command_runner.go:130] > # metrics_port = 9090
	I0809 18:57:49.885723  907909 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0809 18:57:49.885729  907909 command_runner.go:130] > # metrics_socket = ""
	I0809 18:57:49.885733  907909 command_runner.go:130] > # The certificate for the secure metrics server.
	I0809 18:57:49.885739  907909 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0809 18:57:49.885748  907909 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0809 18:57:49.885753  907909 command_runner.go:130] > # certificate on any modification event.
	I0809 18:57:49.885757  907909 command_runner.go:130] > # metrics_cert = ""
	I0809 18:57:49.885763  907909 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0809 18:57:49.885778  907909 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0809 18:57:49.885784  907909 command_runner.go:130] > # metrics_key = ""
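Enabling the endpoint and checking the collector-prefix equivalence described above, as a sketch:

cat <<'EOF' | sudo tee /etc/crio/crio.conf.d/10-metrics.conf
[crio.metrics]
enable_metrics = true
metrics_port = 9090
EOF
sudo systemctl restart crio
# "operations", "crio_operations" and "container_runtime_crio_operations" name the same series:
curl -s http://127.0.0.1:9090/metrics | grep -E 'container_runtime_crio_operations|crio_operations'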
	I0809 18:57:49.885791  907909 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0809 18:57:49.885797  907909 command_runner.go:130] > [crio.tracing]
	I0809 18:57:49.885803  907909 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0809 18:57:49.885809  907909 command_runner.go:130] > # enable_tracing = false
	I0809 18:57:49.885814  907909 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0809 18:57:49.885821  907909 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0809 18:57:49.885826  907909 command_runner.go:130] > # Number of samples to collect per million spans.
	I0809 18:57:49.885833  907909 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0809 18:57:49.885839  907909 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0809 18:57:49.885845  907909 command_runner.go:130] > [crio.stats]
	I0809 18:57:49.885851  907909 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0809 18:57:49.885858  907909 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0809 18:57:49.885864  907909 command_runner.go:130] > # stats_collection_period = 0
	I0809 18:57:49.885891  907909 command_runner.go:130] ! time="2023-08-09 18:57:49.878277094Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0809 18:57:49.885904  907909 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0809 18:57:49.885981  907909 cni.go:84] Creating CNI manager for ""
	I0809 18:57:49.885995  907909 cni.go:136] 1 nodes found, recommending kindnet
	I0809 18:57:49.886007  907909 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0809 18:57:49.886026  907909 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-814696 NodeName:multinode-814696 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0809 18:57:49.886149  907909 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-814696"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
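Before minikube hands this rendered file to kubeadm init, it can be sanity-checked by hand; a sketch, assuming the validate subcommand is available in this kubeadm 1.27 build:

sudo /var/lib/minikube/binaries/v1.27.4/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml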
	
	I0809 18:57:49.886211  907909 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-814696 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:multinode-814696 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0809 18:57:49.886259  907909 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0809 18:57:49.893719  907909 command_runner.go:130] > kubeadm
	I0809 18:57:49.893732  907909 command_runner.go:130] > kubectl
	I0809 18:57:49.893735  907909 command_runner.go:130] > kubelet
	I0809 18:57:49.894372  907909 binaries.go:44] Found k8s binaries, skipping transfer
	I0809 18:57:49.894443  907909 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0809 18:57:49.902117  907909 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I0809 18:57:49.918504  907909 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0809 18:57:49.934216  907909 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I0809 18:57:49.950173  907909 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0809 18:57:49.953351  907909 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0809 18:57:49.962979  907909 certs.go:56] Setting up /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/multinode-814696 for IP: 192.168.58.2
	I0809 18:57:49.963015  907909 certs.go:190] acquiring lock for shared ca certs: {Name:mk19b72d6df3cc07014c8108931f9946a7850469 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 18:57:49.963152  907909 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17011-816603/.minikube/ca.key
	I0809 18:57:49.963188  907909 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17011-816603/.minikube/proxy-client-ca.key
	I0809 18:57:49.963232  907909 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/multinode-814696/client.key
	I0809 18:57:49.963245  907909 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/multinode-814696/client.crt with IP's: []
	I0809 18:57:50.160280  907909 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/multinode-814696/client.crt ...
	I0809 18:57:50.160314  907909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/multinode-814696/client.crt: {Name:mkfb9a604aa047fd82441c37eceed1d69240f208 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 18:57:50.160478  907909 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/multinode-814696/client.key ...
	I0809 18:57:50.160488  907909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/multinode-814696/client.key: {Name:mke718be5b754ec545af31852ac2042d494ede2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 18:57:50.160558  907909 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/multinode-814696/apiserver.key.cee25041
	I0809 18:57:50.160577  907909 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/multinode-814696/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0809 18:57:50.223578  907909 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/multinode-814696/apiserver.crt.cee25041 ...
	I0809 18:57:50.223611  907909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/multinode-814696/apiserver.crt.cee25041: {Name:mk74cd325bab9a087ad97da1fe466c48f730c606 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 18:57:50.223787  907909 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/multinode-814696/apiserver.key.cee25041 ...
	I0809 18:57:50.223798  907909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/multinode-814696/apiserver.key.cee25041: {Name:mk62af7107f4c329cb47effebb4935719a98d029 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 18:57:50.223866  907909 certs.go:337] copying /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/multinode-814696/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/multinode-814696/apiserver.crt
	I0809 18:57:50.223938  907909 certs.go:341] copying /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/multinode-814696/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/multinode-814696/apiserver.key
	I0809 18:57:50.223985  907909 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/multinode-814696/proxy-client.key
	I0809 18:57:50.223997  907909 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/multinode-814696/proxy-client.crt with IP's: []
	I0809 18:57:50.559538  907909 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/multinode-814696/proxy-client.crt ...
	I0809 18:57:50.559574  907909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/multinode-814696/proxy-client.crt: {Name:mk97326040b23a11f8956a1301c9345a9e8b887d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 18:57:50.559757  907909 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/multinode-814696/proxy-client.key ...
	I0809 18:57:50.559769  907909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/multinode-814696/proxy-client.key: {Name:mkb7cca2642089f638f74c2a5859051974a1535f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 18:57:50.559845  907909 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/multinode-814696/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0809 18:57:50.559862  907909 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/multinode-814696/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0809 18:57:50.559875  907909 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/multinode-814696/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0809 18:57:50.559891  907909 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/multinode-814696/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0809 18:57:50.559903  907909 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0809 18:57:50.559916  907909 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0809 18:57:50.559928  907909 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0809 18:57:50.559939  907909 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
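The freshly generated certificates can be spot-checked with openssl (paths taken from the log lines above; the -ext flag requires OpenSSL 1.1.1 or newer, and the log below shows 3.0.2):

openssl x509 -noout -subject -dates -ext subjectAltName \
  -in /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/multinode-814696/apiserver.crt

The SAN list should contain the IPs passed to the generator: 192.168.58.2, 10.96.0.1, 127.0.0.1 and 10.0.0.1.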
	I0809 18:57:50.559993  907909 certs.go:437] found cert: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/home/jenkins/minikube-integration/17011-816603/.minikube/certs/823434.pem (1338 bytes)
	W0809 18:57:50.560028  907909 certs.go:433] ignoring /home/jenkins/minikube-integration/17011-816603/.minikube/certs/home/jenkins/minikube-integration/17011-816603/.minikube/certs/823434_empty.pem, impossibly tiny 0 bytes
	I0809 18:57:50.560041  907909 certs.go:437] found cert: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca-key.pem (1675 bytes)
	I0809 18:57:50.560068  907909 certs.go:437] found cert: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca.pem (1082 bytes)
	I0809 18:57:50.560095  907909 certs.go:437] found cert: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/home/jenkins/minikube-integration/17011-816603/.minikube/certs/cert.pem (1123 bytes)
	I0809 18:57:50.560124  907909 certs.go:437] found cert: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/home/jenkins/minikube-integration/17011-816603/.minikube/certs/key.pem (1679 bytes)
	I0809 18:57:50.560161  907909 certs.go:437] found cert: /home/jenkins/minikube-integration/17011-816603/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17011-816603/.minikube/files/etc/ssl/certs/8234342.pem (1708 bytes)
	I0809 18:57:50.560186  907909 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0809 18:57:50.560201  907909 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/823434.pem -> /usr/share/ca-certificates/823434.pem
	I0809 18:57:50.560213  907909 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/files/etc/ssl/certs/8234342.pem -> /usr/share/ca-certificates/8234342.pem
	I0809 18:57:50.562024  907909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/multinode-814696/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0809 18:57:50.583866  907909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/multinode-814696/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0809 18:57:50.604626  907909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/multinode-814696/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0809 18:57:50.625609  907909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/multinode-814696/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0809 18:57:50.646903  907909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0809 18:57:50.667655  907909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0809 18:57:50.688492  907909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0809 18:57:50.709579  907909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0809 18:57:50.730769  907909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0809 18:57:50.752104  907909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/certs/823434.pem --> /usr/share/ca-certificates/823434.pem (1338 bytes)
	I0809 18:57:50.773025  907909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/files/etc/ssl/certs/8234342.pem --> /usr/share/ca-certificates/8234342.pem (1708 bytes)
	I0809 18:57:50.794324  907909 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0809 18:57:50.810151  907909 ssh_runner.go:195] Run: openssl version
	I0809 18:57:50.814985  907909 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0809 18:57:50.815148  907909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/823434.pem && ln -fs /usr/share/ca-certificates/823434.pem /etc/ssl/certs/823434.pem"
	I0809 18:57:50.823500  907909 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/823434.pem
	I0809 18:57:50.826580  907909 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  9 18:45 /usr/share/ca-certificates/823434.pem
	I0809 18:57:50.826605  907909 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug  9 18:45 /usr/share/ca-certificates/823434.pem
	I0809 18:57:50.826641  907909 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/823434.pem
	I0809 18:57:50.832678  907909 command_runner.go:130] > 51391683
	I0809 18:57:50.832874  907909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/823434.pem /etc/ssl/certs/51391683.0"
	I0809 18:57:50.840945  907909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8234342.pem && ln -fs /usr/share/ca-certificates/8234342.pem /etc/ssl/certs/8234342.pem"
	I0809 18:57:50.849390  907909 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8234342.pem
	I0809 18:57:50.852441  907909 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  9 18:45 /usr/share/ca-certificates/8234342.pem
	I0809 18:57:50.852470  907909 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug  9 18:45 /usr/share/ca-certificates/8234342.pem
	I0809 18:57:50.852511  907909 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8234342.pem
	I0809 18:57:50.858537  907909 command_runner.go:130] > 3ec20f2e
	I0809 18:57:50.858721  907909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8234342.pem /etc/ssl/certs/3ec20f2e.0"
	I0809 18:57:50.867119  907909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0809 18:57:50.875342  907909 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0809 18:57:50.878233  907909 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  9 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0809 18:57:50.878269  907909 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug  9 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0809 18:57:50.878304  907909 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0809 18:57:50.884445  907909 command_runner.go:130] > b5213941
	I0809 18:57:50.884619  907909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
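The symlink names created above are OpenSSL subject hashes, which is how the library locates trust anchors in /etc/ssl/certs; the mapping can be reproduced by hand with the values from this log:

openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
ls -l /etc/ssl/certs/b5213941.0                                           # symlink to minikubeCA.pem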
	I0809 18:57:50.892708  907909 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0809 18:57:50.895531  907909 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0809 18:57:50.895597  907909 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0809 18:57:50.895664  907909 kubeadm.go:404] StartCluster: {Name:multinode-814696 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-814696 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 18:57:50.895764  907909 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0809 18:57:50.895802  907909 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0809 18:57:50.928803  907909 cri.go:89] found id: ""
	I0809 18:57:50.928874  907909 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0809 18:57:50.937027  907909 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0809 18:57:50.937048  907909 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0809 18:57:50.937054  907909 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0809 18:57:50.937119  907909 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0809 18:57:50.944789  907909 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0809 18:57:50.944842  907909 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0809 18:57:50.952550  907909 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0809 18:57:50.952572  907909 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0809 18:57:50.952579  907909 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0809 18:57:50.952586  907909 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0809 18:57:50.952617  907909 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0809 18:57:50.952690  907909 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0809 18:57:50.996502  907909 kubeadm.go:322] [init] Using Kubernetes version: v1.27.4
	I0809 18:57:50.996534  907909 command_runner.go:130] > [init] Using Kubernetes version: v1.27.4
	I0809 18:57:50.996589  907909 kubeadm.go:322] [preflight] Running pre-flight checks
	I0809 18:57:50.996600  907909 command_runner.go:130] > [preflight] Running pre-flight checks
	I0809 18:57:51.031630  907909 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0809 18:57:51.031680  907909 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0809 18:57:51.031791  907909 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1038-gcp
	I0809 18:57:51.031806  907909 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1038-gcp
	I0809 18:57:51.031852  907909 kubeadm.go:322] OS: Linux
	I0809 18:57:51.031867  907909 command_runner.go:130] > OS: Linux
	I0809 18:57:51.031932  907909 kubeadm.go:322] CGROUPS_CPU: enabled
	I0809 18:57:51.031940  907909 command_runner.go:130] > CGROUPS_CPU: enabled
	I0809 18:57:51.031985  907909 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0809 18:57:51.031992  907909 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0809 18:57:51.032036  907909 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0809 18:57:51.032043  907909 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0809 18:57:51.032081  907909 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0809 18:57:51.032087  907909 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0809 18:57:51.032126  907909 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0809 18:57:51.032135  907909 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0809 18:57:51.032190  907909 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0809 18:57:51.032198  907909 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0809 18:57:51.032244  907909 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0809 18:57:51.032258  907909 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0809 18:57:51.032294  907909 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0809 18:57:51.032303  907909 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0809 18:57:51.032340  907909 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0809 18:57:51.032347  907909 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0809 18:57:51.094349  907909 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0809 18:57:51.094361  907909 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0809 18:57:51.094497  907909 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0809 18:57:51.094512  907909 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0809 18:57:51.094639  907909 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0809 18:57:51.094651  907909 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
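
As the preflight hint above notes, the image pull can also be done ahead of time to shave startup latency. A minimal sketch, reusing the kubeadm config file this run already wrote to /var/tmp/minikube (run inside the node):

    # list the images kubeadm v1.27.4 needs, then pre-pull them
    sudo kubeadm config images list --kubernetes-version v1.27.4
    sudo kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml
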
	I0809 18:57:51.289695  907909 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0809 18:57:51.289728  907909 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0809 18:57:51.293327  907909 out.go:204]   - Generating certificates and keys ...
	I0809 18:57:51.293448  907909 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0809 18:57:51.293473  907909 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0809 18:57:51.293593  907909 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0809 18:57:51.293609  907909 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0809 18:57:51.350117  907909 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0809 18:57:51.350150  907909 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0809 18:57:51.545948  907909 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0809 18:57:51.545976  907909 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0809 18:57:51.617857  907909 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0809 18:57:51.617885  907909 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0809 18:57:51.714429  907909 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0809 18:57:51.714465  907909 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0809 18:57:51.837571  907909 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0809 18:57:51.837608  907909 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0809 18:57:51.837741  907909 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-814696] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0809 18:57:51.837773  907909 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-814696] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0809 18:57:52.051035  907909 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0809 18:57:52.051071  907909 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0809 18:57:52.051212  907909 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-814696] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0809 18:57:52.051285  907909 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-814696] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0809 18:57:52.320600  907909 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0809 18:57:52.320627  907909 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0809 18:57:52.641337  907909 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0809 18:57:52.641365  907909 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0809 18:57:52.827377  907909 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0809 18:57:52.827409  907909 command_runner.go:130] > [certs] Generating "sa" key and public key
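
The certificate phase is complete at this point; note that minikube overrides kubeadm's default pki directory with /var/lib/minikube/certs (the certificateDir line above). A quick way to inspect what was generated, sketched with openssl against the paths and SANs shown in this log:

    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -subject -dates
    # the etcd server cert should carry the SANs logged above
    # (localhost, multinode-814696, 192.168.58.2, 127.0.0.1, ::1)
    sudo openssl x509 -in /var/lib/minikube/certs/etcd/server.crt -noout -text \
      | grep -A1 'Subject Alternative Name'
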
	I0809 18:57:52.827499  907909 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0809 18:57:52.827510  907909 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0809 18:57:52.931203  907909 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0809 18:57:52.931234  907909 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0809 18:57:53.043497  907909 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0809 18:57:53.043526  907909 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0809 18:57:53.312291  907909 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0809 18:57:53.312323  907909 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0809 18:57:53.649865  907909 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0809 18:57:53.649895  907909 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0809 18:57:53.657921  907909 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0809 18:57:53.657946  907909 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0809 18:57:53.658654  907909 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0809 18:57:53.658686  907909 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0809 18:57:53.658749  907909 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0809 18:57:53.658763  907909 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0809 18:57:53.732097  907909 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0809 18:57:53.735316  907909 out.go:204]   - Booting up control plane ...
	I0809 18:57:53.732155  907909 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0809 18:57:53.735467  907909 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0809 18:57:53.735472  907909 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0809 18:57:53.735978  907909 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0809 18:57:53.736001  907909 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0809 18:57:53.736927  907909 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0809 18:57:53.736943  907909 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0809 18:57:53.737794  907909 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0809 18:57:53.737812  907909 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0809 18:57:53.740398  907909 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0809 18:57:53.740415  907909 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
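
The wait-control-plane phase has a 4m0s budget; if it ever hangs, the first things to check are the static-pod manifest directory named above and the CRI view of the kube-system containers (the same crictl filter minikube uses elsewhere in this log). A sketch, run inside the node:

    ls /etc/kubernetes/manifests
    sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system
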
	I0809 18:57:59.243237  907909 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.502782 seconds
	I0809 18:57:59.243271  907909 command_runner.go:130] > [apiclient] All control plane components are healthy after 5.502782 seconds
	I0809 18:57:59.243403  907909 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0809 18:57:59.243419  907909 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0809 18:57:59.255922  907909 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0809 18:57:59.255955  907909 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0809 18:57:59.775782  907909 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0809 18:57:59.775808  907909 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0809 18:57:59.775951  907909 kubeadm.go:322] [mark-control-plane] Marking the node multinode-814696 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0809 18:57:59.775959  907909 command_runner.go:130] > [mark-control-plane] Marking the node multinode-814696 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0809 18:58:00.284978  907909 kubeadm.go:322] [bootstrap-token] Using token: gnhp33.5u6g1q27ft1wr2pa
	I0809 18:58:00.286571  907909 out.go:204]   - Configuring RBAC rules ...
	I0809 18:58:00.285044  907909 command_runner.go:130] > [bootstrap-token] Using token: gnhp33.5u6g1q27ft1wr2pa
	I0809 18:58:00.286721  907909 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0809 18:58:00.286737  907909 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0809 18:58:00.290439  907909 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0809 18:58:00.290473  907909 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0809 18:58:00.296527  907909 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0809 18:58:00.296548  907909 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0809 18:58:00.299295  907909 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0809 18:58:00.299316  907909 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0809 18:58:00.303085  907909 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0809 18:58:00.303117  907909 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0809 18:58:00.305829  907909 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0809 18:58:00.305847  907909 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0809 18:58:00.315520  907909 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0809 18:58:00.315540  907909 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0809 18:58:00.521603  907909 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0809 18:58:00.521631  907909 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0809 18:58:00.695437  907909 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0809 18:58:00.695460  907909 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0809 18:58:00.696823  907909 kubeadm.go:322] 
	I0809 18:58:00.696930  907909 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0809 18:58:00.696944  907909 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0809 18:58:00.696951  907909 kubeadm.go:322] 
	I0809 18:58:00.697028  907909 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0809 18:58:00.697036  907909 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0809 18:58:00.697040  907909 kubeadm.go:322] 
	I0809 18:58:00.697074  907909 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0809 18:58:00.697084  907909 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0809 18:58:00.697138  907909 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0809 18:58:00.697148  907909 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0809 18:58:00.697213  907909 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0809 18:58:00.697229  907909 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0809 18:58:00.697238  907909 kubeadm.go:322] 
	I0809 18:58:00.697310  907909 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0809 18:58:00.697321  907909 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0809 18:58:00.697325  907909 kubeadm.go:322] 
	I0809 18:58:00.697389  907909 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0809 18:58:00.697399  907909 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0809 18:58:00.697404  907909 kubeadm.go:322] 
	I0809 18:58:00.697472  907909 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0809 18:58:00.697481  907909 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0809 18:58:00.697578  907909 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0809 18:58:00.697588  907909 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0809 18:58:00.697680  907909 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0809 18:58:00.697689  907909 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0809 18:58:00.697695  907909 kubeadm.go:322] 
	I0809 18:58:00.697804  907909 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0809 18:58:00.697815  907909 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0809 18:58:00.697913  907909 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0809 18:58:00.697926  907909 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0809 18:58:00.697931  907909 kubeadm.go:322] 
	I0809 18:58:00.698037  907909 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token gnhp33.5u6g1q27ft1wr2pa \
	I0809 18:58:00.698047  907909 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token gnhp33.5u6g1q27ft1wr2pa \
	I0809 18:58:00.698179  907909 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:85b611322a15d151ff0bcc3d793970c57c3eadea4a879931fb9494c00472255c \
	I0809 18:58:00.698189  907909 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:85b611322a15d151ff0bcc3d793970c57c3eadea4a879931fb9494c00472255c \
	I0809 18:58:00.698217  907909 kubeadm.go:322] 	--control-plane 
	I0809 18:58:00.698227  907909 command_runner.go:130] > 	--control-plane 
	I0809 18:58:00.698232  907909 kubeadm.go:322] 
	I0809 18:58:00.698342  907909 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0809 18:58:00.698351  907909 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0809 18:58:00.698355  907909 kubeadm.go:322] 
	I0809 18:58:00.698459  907909 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token gnhp33.5u6g1q27ft1wr2pa \
	I0809 18:58:00.698471  907909 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token gnhp33.5u6g1q27ft1wr2pa \
	I0809 18:58:00.698607  907909 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:85b611322a15d151ff0bcc3d793970c57c3eadea4a879931fb9494c00472255c 
	I0809 18:58:00.698618  907909 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:85b611322a15d151ff0bcc3d793970c57c3eadea4a879931fb9494c00472255c 
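
The join commands above embed a bootstrap token (gnhp33.5u6g1q27ft1wr2pa) that expires after 24h by default, so they cannot be replayed indefinitely; a fresh worker join line can always be regenerated on the control plane. A minimal sketch:

    kubeadm token create --print-join-command
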
	I0809 18:58:00.701144  907909 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1038-gcp\n", err: exit status 1
	I0809 18:58:00.701176  907909 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1038-gcp\n", err: exit status 1
	I0809 18:58:00.701319  907909 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0809 18:58:00.701333  907909 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0809 18:58:00.701349  907909 cni.go:84] Creating CNI manager for ""
	I0809 18:58:00.701363  907909 cni.go:136] 1 nodes found, recommending kindnet
	I0809 18:58:00.703911  907909 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0809 18:58:00.705219  907909 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0809 18:58:00.756245  907909 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0809 18:58:00.756276  907909 command_runner.go:130] >   Size: 3955775   	Blocks: 7736       IO Block: 4096   regular file
	I0809 18:58:00.756286  907909 command_runner.go:130] > Device: 37h/55d	Inode: 800976      Links: 1
	I0809 18:58:00.756295  907909 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0809 18:58:00.756303  907909 command_runner.go:130] > Access: 2023-05-09 19:53:47.000000000 +0000
	I0809 18:58:00.756309  907909 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I0809 18:58:00.756317  907909 command_runner.go:130] > Change: 2023-08-09 18:39:27.249115629 +0000
	I0809 18:58:00.756324  907909 command_runner.go:130] >  Birth: 2023-08-09 18:39:27.225113304 +0000
	I0809 18:58:00.756387  907909 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.27.4/kubectl ...
	I0809 18:58:00.756403  907909 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0809 18:58:00.775075  907909 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0809 18:58:01.497797  907909 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0809 18:58:01.502567  907909 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0809 18:58:01.509202  907909 command_runner.go:130] > serviceaccount/kindnet created
	I0809 18:58:01.518551  907909 command_runner.go:130] > daemonset.apps/kindnet created
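
The kindnet CNI manifest has now been applied; whether its pods actually come up is easiest to confirm with a rollout check against the DaemonSet name created above. A sketch:

    kubectl -n kube-system rollout status daemonset kindnet --timeout=2m
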
	I0809 18:58:01.522632  907909 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0809 18:58:01.522738  907909 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:58:01.522748  907909 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl label nodes minikube.k8s.io/version=v1.31.1 minikube.k8s.io/commit=e286a113bb5db20a65222adef757d15268cdbb1a minikube.k8s.io/name=multinode-814696 minikube.k8s.io/updated_at=2023_08_09T18_58_01_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:58:01.529824  907909 command_runner.go:130] > -16
	I0809 18:58:01.529862  907909 ops.go:34] apiserver oom_adj: -16
	I0809 18:58:01.593185  907909 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0809 18:58:01.597299  907909 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:58:01.604073  907909 command_runner.go:130] > node/multinode-814696 labeled
	I0809 18:58:01.687808  907909 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0809 18:58:01.687913  907909 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:58:01.791272  907909 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0809 18:58:02.294496  907909 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:58:02.357178  907909 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0809 18:58:02.794168  907909 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:58:02.854168  907909 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0809 18:58:03.294185  907909 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:58:03.357443  907909 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0809 18:58:03.794640  907909 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:58:03.857809  907909 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0809 18:58:04.294459  907909 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:58:04.356370  907909 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0809 18:58:04.794523  907909 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:58:04.855613  907909 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0809 18:58:05.294356  907909 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:58:05.359431  907909 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0809 18:58:05.793991  907909 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:58:05.856074  907909 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0809 18:58:06.294308  907909 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:58:06.356198  907909 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0809 18:58:06.794244  907909 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:58:06.855466  907909 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0809 18:58:07.294571  907909 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:58:07.357067  907909 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0809 18:58:07.794737  907909 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:58:07.858398  907909 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0809 18:58:08.293958  907909 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:58:08.357303  907909 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0809 18:58:08.794921  907909 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:58:08.858579  907909 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0809 18:58:09.294133  907909 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:58:09.355211  907909 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0809 18:58:09.794273  907909 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:58:09.855979  907909 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0809 18:58:10.294288  907909 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:58:10.358000  907909 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0809 18:58:10.794638  907909 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:58:10.855103  907909 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0809 18:58:11.293971  907909 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:58:11.356831  907909 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0809 18:58:11.794608  907909 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:58:11.858037  907909 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0809 18:58:12.294678  907909 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:58:12.369443  907909 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0809 18:58:12.794262  907909 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:58:12.861717  907909 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0809 18:58:13.294109  907909 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0809 18:58:13.360204  907909 command_runner.go:130] > NAME      SECRETS   AGE
	I0809 18:58:13.360226  907909 command_runner.go:130] > default   0         0s
	I0809 18:58:13.360246  907909 kubeadm.go:1081] duration metric: took 11.83758851s to wait for elevateKubeSystemPrivileges.
	I0809 18:58:13.360270  907909 kubeadm.go:406] StartCluster complete in 22.46461198s
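
The burst of "serviceaccounts \"default\" not found" errors above is expected, not a failure: the default ServiceAccount is created asynchronously by the controller manager, and minikube simply polls (here for ~11.8s) until it exists. The same wait, sketched around the exact command this log runs:

    until sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
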
	I0809 18:58:13.360299  907909 settings.go:142] acquiring lock: {Name:mk873daac26ba3897eede1f5f8e0b40f2c63510f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 18:58:13.360413  907909 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17011-816603/kubeconfig
	I0809 18:58:13.361279  907909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17011-816603/kubeconfig: {Name:mk4f98edb5dc8df50bdb1180a23f12dadd75d59f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 18:58:13.361534  907909 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0809 18:58:13.361686  907909 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0809 18:58:13.361763  907909 config.go:182] Loaded profile config "multinode-814696": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0809 18:58:13.361787  907909 addons.go:69] Setting storage-provisioner=true in profile "multinode-814696"
	I0809 18:58:13.361815  907909 addons.go:231] Setting addon storage-provisioner=true in "multinode-814696"
	I0809 18:58:13.361791  907909 addons.go:69] Setting default-storageclass=true in profile "multinode-814696"
	I0809 18:58:13.361884  907909 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/17011-816603/kubeconfig
	I0809 18:58:13.361898  907909 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-814696"
	I0809 18:58:13.361888  907909 host.go:66] Checking if "multinode-814696" exists ...
	I0809 18:58:13.362180  907909 kapi.go:59] client config for multinode-814696: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17011-816603/.minikube/profiles/multinode-814696/client.crt", KeyFile:"/home/jenkins/minikube-integration/17011-816603/.minikube/profiles/multinode-814696/client.key", CAFile:"/home/jenkins/minikube-integration/17011-816603/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d27100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0809 18:58:13.362322  907909 cli_runner.go:164] Run: docker container inspect multinode-814696 --format={{.State.Status}}
	I0809 18:58:13.362480  907909 cli_runner.go:164] Run: docker container inspect multinode-814696 --format={{.State.Status}}
	I0809 18:58:13.363045  907909 cert_rotation.go:137] Starting client certificate rotation controller
	I0809 18:58:13.363395  907909 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0809 18:58:13.363417  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:13.363430  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:13.363444  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:13.382522  907909 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0809 18:58:13.382560  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:13.382572  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:13.382581  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:13.382596  907909 round_trippers.go:580]     Content-Length: 291
	I0809 18:58:13.382604  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:13 GMT
	I0809 18:58:13.382613  907909 round_trippers.go:580]     Audit-Id: 1dc9104c-d33d-4c7b-8b5e-8268e777b2c0
	I0809 18:58:13.382622  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:13.382630  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:13.382684  907909 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c3e8939c-f800-4097-babb-8dcae19cd8ea","resourceVersion":"313","creationTimestamp":"2023-08-09T18:58:00Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0809 18:58:13.383216  907909 request.go:1188] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c3e8939c-f800-4097-babb-8dcae19cd8ea","resourceVersion":"313","creationTimestamp":"2023-08-09T18:58:00Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0809 18:58:13.383287  907909 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0809 18:58:13.383297  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:13.383309  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:13.383319  907909 round_trippers.go:473]     Content-Type: application/json
	I0809 18:58:13.383327  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:13.389499  907909 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0809 18:58:13.389591  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:13.389605  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:13.389615  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:13.389624  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:13.389634  907909 round_trippers.go:580]     Content-Length: 291
	I0809 18:58:13.389643  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:13 GMT
	I0809 18:58:13.389664  907909 round_trippers.go:580]     Audit-Id: e98845ff-26f0-4f2c-8357-6eda80e4cc50
	I0809 18:58:13.389672  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:13.390219  907909 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/17011-816603/kubeconfig
	I0809 18:58:13.390534  907909 kapi.go:59] client config for multinode-814696: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17011-816603/.minikube/profiles/multinode-814696/client.crt", KeyFile:"/home/jenkins/minikube-integration/17011-816603/.minikube/profiles/multinode-814696/client.key", CAFile:"/home/jenkins/minikube-integration/17011-816603/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d27100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0809 18:58:13.390958  907909 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0809 18:58:13.390979  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:13.390984  907909 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c3e8939c-f800-4097-babb-8dcae19cd8ea","resourceVersion":"335","creationTimestamp":"2023-08-09T18:58:00Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0809 18:58:13.390991  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:13.391085  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:13.391168  907909 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0809 18:58:13.391178  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:13.391190  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:13.391202  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:13.393577  907909 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0809 18:58:13.393607  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:13.395095  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:13.395111  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:13.395125  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:13.395129  907909 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0809 18:58:13.395139  907909 round_trippers.go:580]     Content-Length: 291
	I0809 18:58:13.395142  907909 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0809 18:58:13.395153  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:13 GMT
	I0809 18:58:13.395166  907909 round_trippers.go:580]     Audit-Id: 8fa6484f-e3bb-46a9-b700-86ce76b5fe82
	I0809 18:58:13.395177  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:13.393228  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:13.395211  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:13.395219  907909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-814696
	I0809 18:58:13.395225  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:13.395235  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:13.395245  907909 round_trippers.go:580]     Content-Length: 109
	I0809 18:58:13.395253  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:13 GMT
	I0809 18:58:13.395261  907909 round_trippers.go:580]     Audit-Id: fe72408c-8a28-40de-a90b-df1333ae9177
	I0809 18:58:13.395187  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:13.395269  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:13.395277  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:13.395297  907909 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c3e8939c-f800-4097-babb-8dcae19cd8ea","resourceVersion":"335","creationTimestamp":"2023-08-09T18:58:00Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0809 18:58:13.395303  907909 request.go:1188] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"335"},"items":[]}
	I0809 18:58:13.395424  907909 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-814696" context rescaled to 1 replicas
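
The GET/PUT pair against the Scale subresource above is how minikube drops coredns from the kubeadm default of 2 replicas to 1 for a single-node start. The same change expressed with kubectl, as a sketch:

    kubectl -n kube-system scale deployment coredns --replicas=1
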
	I0809 18:58:13.395461  907909 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0809 18:58:13.397029  907909 out.go:177] * Verifying Kubernetes components...
	I0809 18:58:13.395618  907909 addons.go:231] Setting addon default-storageclass=true in "multinode-814696"
	I0809 18:58:13.398346  907909 host.go:66] Checking if "multinode-814696" exists ...
	I0809 18:58:13.398373  907909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0809 18:58:13.398840  907909 cli_runner.go:164] Run: docker container inspect multinode-814696 --format={{.State.Status}}
	I0809 18:58:13.418550  907909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33482 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/multinode-814696/id_rsa Username:docker}
	I0809 18:58:13.420808  907909 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0809 18:58:13.420833  907909 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0809 18:58:13.420893  907909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-814696
	I0809 18:58:13.436342  907909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33482 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/multinode-814696/id_rsa Username:docker}
	I0809 18:58:13.483946  907909 command_runner.go:130] > apiVersion: v1
	I0809 18:58:13.483974  907909 command_runner.go:130] > data:
	I0809 18:58:13.483981  907909 command_runner.go:130] >   Corefile: |
	I0809 18:58:13.483987  907909 command_runner.go:130] >     .:53 {
	I0809 18:58:13.483994  907909 command_runner.go:130] >         errors
	I0809 18:58:13.484001  907909 command_runner.go:130] >         health {
	I0809 18:58:13.484010  907909 command_runner.go:130] >            lameduck 5s
	I0809 18:58:13.484015  907909 command_runner.go:130] >         }
	I0809 18:58:13.484021  907909 command_runner.go:130] >         ready
	I0809 18:58:13.484027  907909 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0809 18:58:13.484032  907909 command_runner.go:130] >            pods insecure
	I0809 18:58:13.484037  907909 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0809 18:58:13.484046  907909 command_runner.go:130] >            ttl 30
	I0809 18:58:13.484049  907909 command_runner.go:130] >         }
	I0809 18:58:13.484054  907909 command_runner.go:130] >         prometheus :9153
	I0809 18:58:13.484062  907909 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0809 18:58:13.484067  907909 command_runner.go:130] >            max_concurrent 1000
	I0809 18:58:13.484073  907909 command_runner.go:130] >         }
	I0809 18:58:13.484077  907909 command_runner.go:130] >         cache 30
	I0809 18:58:13.484084  907909 command_runner.go:130] >         loop
	I0809 18:58:13.484088  907909 command_runner.go:130] >         reload
	I0809 18:58:13.484094  907909 command_runner.go:130] >         loadbalance
	I0809 18:58:13.484098  907909 command_runner.go:130] >     }
	I0809 18:58:13.484104  907909 command_runner.go:130] > kind: ConfigMap
	I0809 18:58:13.484108  907909 command_runner.go:130] > metadata:
	I0809 18:58:13.484117  907909 command_runner.go:130] >   creationTimestamp: "2023-08-09T18:58:00Z"
	I0809 18:58:13.484123  907909 command_runner.go:130] >   name: coredns
	I0809 18:58:13.484127  907909 command_runner.go:130] >   namespace: kube-system
	I0809 18:58:13.484134  907909 command_runner.go:130] >   resourceVersion: "228"
	I0809 18:58:13.484138  907909 command_runner.go:130] >   uid: e8373fc7-7687-4dfa-b86f-f388b66db482
	I0809 18:58:13.484280  907909 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0809 18:58:13.484704  907909 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/17011-816603/kubeconfig
	I0809 18:58:13.485006  907909 kapi.go:59] client config for multinode-814696: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17011-816603/.minikube/profiles/multinode-814696/client.crt", KeyFile:"/home/jenkins/minikube-integration/17011-816603/.minikube/profiles/multinode-814696/client.key", CAFile:"/home/jenkins/minikube-integration/17011-816603/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d27100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0809 18:58:13.485358  907909 node_ready.go:35] waiting up to 6m0s for node "multinode-814696" to be "Ready" ...
	I0809 18:58:13.485454  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:13.485465  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:13.485478  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:13.485493  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:13.487669  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:13.487695  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:13.487707  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:13.487717  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:13.487725  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:13.487733  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:13 GMT
	I0809 18:58:13.487740  907909 round_trippers.go:580]     Audit-Id: bda2a637-8342-49b1-b8a5-0b5d0e687d0e
	I0809 18:58:13.487748  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:13.487873  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:13.488614  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:13.488623  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:13.488631  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:13.488638  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:13.490804  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:13.490828  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:13.490839  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:13.490849  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:13.490858  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:13 GMT
	I0809 18:58:13.490866  907909 round_trippers.go:580]     Audit-Id: de433a20-ecf5-43c1-8468-f7a98f8650ee
	I0809 18:58:13.490877  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:13.490889  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:13.491052  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:13.583198  907909 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0809 18:58:13.678411  907909 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0809 18:58:13.991835  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:13.991861  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:13.991873  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:13.991884  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:14.057427  907909 round_trippers.go:574] Response Status: 200 OK in 65 milliseconds
	I0809 18:58:14.057461  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:14.057472  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:14.057482  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:14.057492  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:14.057501  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:14.057518  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:14 GMT
	I0809 18:58:14.057527  907909 round_trippers.go:580]     Audit-Id: 756ef9c4-00d3-4ac5-8a5e-a9ee3a722780
	I0809 18:58:14.057661  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:14.286988  907909 command_runner.go:130] > configmap/coredns replaced
	I0809 18:58:14.291258  907909 start.go:901] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
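
The injected record comes from the sed pipeline run earlier, which splices a hosts stanza in front of the forward plugin in the coredns Corefile. Reconstructed from that sed expression, the patched Corefile gains:

        hosts {
           192.168.58.1 host.minikube.internal
           fallthrough
        }

One way to confirm the edit took is kubectl -n kube-system get configmap coredns -o yaml.
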
	I0809 18:58:14.492629  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:14.492651  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:14.492660  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:14.492665  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:14.495039  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:14.495064  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:14.495076  907909 round_trippers.go:580]     Audit-Id: 6656a182-ac01-43d1-88f1-59b93b964044
	I0809 18:58:14.495085  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:14.495097  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:14.495110  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:14.495121  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:14.495133  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:14 GMT
	I0809 18:58:14.495297  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:14.518410  907909 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0809 18:58:14.524067  907909 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0809 18:58:14.530493  907909 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0809 18:58:14.536504  907909 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0809 18:58:14.542485  907909 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0809 18:58:14.550639  907909 command_runner.go:130] > pod/storage-provisioner created
	I0809 18:58:14.555754  907909 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0809 18:58:14.557450  907909 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0809 18:58:14.558716  907909 addons.go:502] enable addons completed in 1.197037219s: enabled=[storage-provisioner default-storageclass]
	I0809 18:58:14.992358  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:14.992381  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:14.992391  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:14.992399  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:14.994758  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:14.994787  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:14.994798  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:14.994806  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:14.994814  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:14.994822  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:14.994832  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:14 GMT
	I0809 18:58:14.994845  907909 round_trippers.go:580]     Audit-Id: a0ee39f5-c6f7-4558-8e24-dd77aef4081e
	I0809 18:58:14.994953  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:15.492669  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:15.492694  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:15.492702  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:15.492711  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:15.495235  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:15.495262  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:15.495275  907909 round_trippers.go:580]     Audit-Id: dd21147a-d9f8-442d-9dd6-d84cc3747433
	I0809 18:58:15.495285  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:15.495294  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:15.495305  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:15.495315  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:15.495331  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:15 GMT
	I0809 18:58:15.495470  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:15.495822  907909 node_ready.go:58] node "multinode-814696" has status "Ready":"False"
	I0809 18:58:15.992009  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:15.992031  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:15.992039  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:15.992046  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:15.994355  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:15.994381  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:15.994392  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:15.994401  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:15 GMT
	I0809 18:58:15.994408  907909 round_trippers.go:580]     Audit-Id: b9247b3d-2058-4230-ba6f-897f9f891ea0
	I0809 18:58:15.994414  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:15.994419  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:15.994424  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:15.994509  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:16.491847  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:16.491877  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:16.491890  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:16.491901  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:16.494294  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:16.494318  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:16.494326  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:16.494332  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:16.494338  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:16 GMT
	I0809 18:58:16.494343  907909 round_trippers.go:580]     Audit-Id: 069643bb-2fa3-4a1f-aa10-c85e50e8eb8e
	I0809 18:58:16.494350  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:16.494359  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:16.494498  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:16.991806  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:16.991828  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:16.991836  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:16.991850  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:16.994005  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:16.994031  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:16.994043  907909 round_trippers.go:580]     Audit-Id: a3da5dc2-100c-4366-b00d-9dedd8fcca6d
	I0809 18:58:16.994051  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:16.994059  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:16.994068  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:16.994081  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:16.994094  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:16 GMT
	I0809 18:58:16.994200  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:17.491769  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:17.491798  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:17.491809  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:17.491816  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:17.494228  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:17.494251  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:17.494259  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:17.494264  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:17.494270  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:17.494275  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:17 GMT
	I0809 18:58:17.494281  907909 round_trippers.go:580]     Audit-Id: 68ae372c-c11a-425d-a569-ec62337b5e11
	I0809 18:58:17.494286  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:17.494417  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:17.992472  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:17.992494  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:17.992502  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:17.992509  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:17.994972  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:17.994995  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:17.995003  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:17.995009  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:17.995015  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:17 GMT
	I0809 18:58:17.995020  907909 round_trippers.go:580]     Audit-Id: 1c76c0e0-28af-4e50-8bd4-6c5c76f08535
	I0809 18:58:17.995026  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:17.995032  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:17.995134  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:17.995453  907909 node_ready.go:58] node "multinode-814696" has status "Ready":"False"
	I0809 18:58:18.491776  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:18.491803  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:18.491814  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:18.491830  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:18.494407  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:18.494428  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:18.494435  907909 round_trippers.go:580]     Audit-Id: 8b176a7f-33da-49c4-87a4-5fed408f6d94
	I0809 18:58:18.494441  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:18.494446  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:18.494451  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:18.494457  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:18.494462  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:18 GMT
	I0809 18:58:18.494614  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:18.991819  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:18.991842  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:18.991853  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:18.991861  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:18.993700  907909 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0809 18:58:18.993720  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:18.993727  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:18.993733  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:18 GMT
	I0809 18:58:18.993739  907909 round_trippers.go:580]     Audit-Id: 586b5adf-0493-434c-a2d4-00db4c48a5ca
	I0809 18:58:18.993748  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:18.993756  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:18.993764  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:18.993865  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:19.492542  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:19.492569  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:19.492591  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:19.492599  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:19.495018  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:19.495038  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:19.495046  907909 round_trippers.go:580]     Audit-Id: ac5efe28-c7ba-4e80-a725-5a21cc5fa97e
	I0809 18:58:19.495053  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:19.495059  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:19.495064  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:19.495070  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:19.495077  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:19 GMT
	I0809 18:58:19.495210  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:19.991829  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:19.991850  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:19.991859  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:19.991865  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:19.994096  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:19.994116  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:19.994123  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:19 GMT
	I0809 18:58:19.994130  907909 round_trippers.go:580]     Audit-Id: da1c248d-9548-4efa-af9b-174828e9c05a
	I0809 18:58:19.994138  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:19.994147  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:19.994155  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:19.994162  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:19.994269  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:20.491913  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:20.491939  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:20.491948  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:20.491954  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:20.494608  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:20.494647  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:20.494659  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:20.494669  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:20 GMT
	I0809 18:58:20.494674  907909 round_trippers.go:580]     Audit-Id: 1b1f67fe-d625-4153-a656-4d07b2ed6242
	I0809 18:58:20.494680  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:20.494685  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:20.494692  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:20.494806  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:20.495124  907909 node_ready.go:58] node "multinode-814696" has status "Ready":"False"
	I0809 18:58:20.992396  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:20.992418  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:20.992427  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:20.992435  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:20.994765  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:20.994791  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:20.994801  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:20 GMT
	I0809 18:58:20.994810  907909 round_trippers.go:580]     Audit-Id: e9da0a90-dcff-4473-b68f-8e8be07198e9
	I0809 18:58:20.994817  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:20.994826  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:20.994837  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:20.994846  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:20.994958  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:21.492617  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:21.492641  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:21.492649  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:21.492655  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:21.495066  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:21.495087  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:21.495094  907909 round_trippers.go:580]     Audit-Id: 7144c0cb-f4ef-4ab3-a126-027e191136df
	I0809 18:58:21.495103  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:21.495111  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:21.495119  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:21.495127  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:21.495136  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:21 GMT
	I0809 18:58:21.495258  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:21.991839  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:21.991862  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:21.991870  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:21.991877  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:21.994252  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:21.994276  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:21.994287  907909 round_trippers.go:580]     Audit-Id: 306a86c7-3f81-45c6-946f-8ebe88e4eceb
	I0809 18:58:21.994296  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:21.994305  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:21.994312  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:21.994319  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:21.994328  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:21 GMT
	I0809 18:58:21.994444  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:22.491819  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:22.491840  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:22.491849  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:22.491855  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:22.494152  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:22.494175  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:22.494184  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:22 GMT
	I0809 18:58:22.494199  907909 round_trippers.go:580]     Audit-Id: 7ed3cc43-349b-47ce-9bf3-20c0d99a3e17
	I0809 18:58:22.494207  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:22.494214  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:22.494224  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:22.494231  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:22.494415  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:22.992038  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:22.992061  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:22.992075  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:22.992082  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:22.994411  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:22.994439  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:22.994450  907909 round_trippers.go:580]     Audit-Id: ce55f496-f5ed-4508-97c8-9625f32a763e
	I0809 18:58:22.994459  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:22.994466  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:22.994473  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:22.994482  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:22.994488  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:22 GMT
	I0809 18:58:22.994604  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:22.994925  907909 node_ready.go:58] node "multinode-814696" has status "Ready":"False"
	I0809 18:58:23.492293  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:23.492315  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:23.492323  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:23.492330  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:23.494596  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:23.494616  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:23.494623  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:23 GMT
	I0809 18:58:23.494629  907909 round_trippers.go:580]     Audit-Id: 046a5c44-f3c3-479b-9dc4-59efe9fe0e27
	I0809 18:58:23.494634  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:23.494639  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:23.494644  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:23.494651  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:23.494803  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:23.992434  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:23.992456  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:23.992464  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:23.992470  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:23.994745  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:23.994772  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:23.994780  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:23.994789  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:23 GMT
	I0809 18:58:23.994794  907909 round_trippers.go:580]     Audit-Id: e78603e3-6a36-47b1-8e8c-c04949411118
	I0809 18:58:23.994800  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:23.994808  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:23.994819  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:23.994940  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:24.492624  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:24.492648  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:24.492656  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:24.492662  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:24.495240  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:24.495274  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:24.495286  907909 round_trippers.go:580]     Audit-Id: ed65dcd2-851c-46ea-b309-9cbc1aee05c4
	I0809 18:58:24.495294  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:24.495303  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:24.495312  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:24.495322  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:24.495331  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:24 GMT
	I0809 18:58:24.495467  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:24.991780  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:24.991803  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:24.991814  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:24.991822  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:24.994071  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:24.994100  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:24.994111  907909 round_trippers.go:580]     Audit-Id: c16a7f6c-af97-4e03-adef-53e7dea64e63
	I0809 18:58:24.994121  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:24.994134  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:24.994143  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:24.994153  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:24.994164  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:24 GMT
	I0809 18:58:24.994300  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:25.491814  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:25.491835  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:25.491843  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:25.491849  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:25.494527  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:25.494553  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:25.494564  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:25.494574  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:25.494583  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:25 GMT
	I0809 18:58:25.494594  907909 round_trippers.go:580]     Audit-Id: dbd074c5-0dd5-49d5-b44f-d200bb898f7f
	I0809 18:58:25.494606  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:25.494613  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:25.494756  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:25.495182  907909 node_ready.go:58] node "multinode-814696" has status "Ready":"False"
	I0809 18:58:25.992385  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:25.992412  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:25.992422  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:25.992430  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:25.995018  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:25.995044  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:25.995055  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:25.995064  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:25.995072  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:25 GMT
	I0809 18:58:25.995080  907909 round_trippers.go:580]     Audit-Id: 0b24a594-20d3-4163-941b-86e077dce768
	I0809 18:58:25.995090  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:25.995103  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:25.995226  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:26.491839  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:26.491866  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:26.491875  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:26.491881  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:26.494245  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:26.494264  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:26.494272  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:26 GMT
	I0809 18:58:26.494278  907909 round_trippers.go:580]     Audit-Id: 5a40c478-a6f5-4701-9754-6a199a9f7abd
	I0809 18:58:26.494285  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:26.494291  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:26.494296  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:26.494302  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:26.494398  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:26.992036  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:26.992058  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:26.992066  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:26.992072  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:26.994397  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:26.994423  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:26.994432  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:26.994440  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:26.994449  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:26 GMT
	I0809 18:58:26.994457  907909 round_trippers.go:580]     Audit-Id: 2b4ba23b-b785-42f7-9af7-618cc0420b58
	I0809 18:58:26.994470  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:26.994482  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:26.994595  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:27.491813  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:27.491834  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:27.491843  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:27.491849  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:27.494135  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:27.494161  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:27.494171  907909 round_trippers.go:580]     Audit-Id: b6e61ee9-693c-4b82-a5a8-84239c8aa187
	I0809 18:58:27.494181  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:27.494191  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:27.494205  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:27.494215  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:27.494221  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:27 GMT
	I0809 18:58:27.494358  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:27.992216  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:27.992238  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:27.992246  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:27.992252  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:27.994747  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:27.994767  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:27.994775  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:27.994785  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:27.994794  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:27 GMT
	I0809 18:58:27.994803  907909 round_trippers.go:580]     Audit-Id: ed3e4fb8-7143-40f1-a4e6-cea7ac12b74c
	I0809 18:58:27.994810  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:27.994822  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:27.994953  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:27.995281  907909 node_ready.go:58] node "multinode-814696" has status "Ready":"False"
	I0809 18:58:28.492540  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:28.492563  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:28.492572  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:28.492578  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:28.494953  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:28.494977  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:28.494988  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:28.494998  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:28.495006  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:28.495014  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:28.495025  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:28 GMT
	I0809 18:58:28.495034  907909 round_trippers.go:580]     Audit-Id: 6ffe8fb2-a830-4087-9fea-cbddbda97257
	I0809 18:58:28.495215  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:28.991760  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:28.991781  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:28.991789  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:28.991795  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:28.993572  907909 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0809 18:58:28.993599  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:28.993610  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:28.993619  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:28.993626  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:28.993635  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:28.993641  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:28 GMT
	I0809 18:58:28.993649  907909 round_trippers.go:580]     Audit-Id: 431f0a9e-0831-4730-9d80-a696b2e54dc3
	I0809 18:58:28.993745  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:29.492402  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:29.492432  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:29.492441  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:29.492447  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:29.494975  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:29.494998  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:29.495005  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:29.495011  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:29.495016  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:29 GMT
	I0809 18:58:29.495022  907909 round_trippers.go:580]     Audit-Id: 56eac2ac-0301-4370-8406-bd73d28a902f
	I0809 18:58:29.495027  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:29.495032  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:29.495174  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:29.992421  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:29.992441  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:29.992449  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:29.992456  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:29.994814  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:29.994841  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:29.994851  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:29.994861  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:29.994869  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:29 GMT
	I0809 18:58:29.994880  907909 round_trippers.go:580]     Audit-Id: f56d2f39-e863-42a8-bc9d-fa5bb69fd3e0
	I0809 18:58:29.994889  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:29.994899  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:29.995016  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:29.995337  907909 node_ready.go:58] node "multinode-814696" has status "Ready":"False"
	I0809 18:58:30.492637  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:30.492661  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:30.492669  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:30.492675  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:30.495006  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:30.495030  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:30.495041  907909 round_trippers.go:580]     Audit-Id: 501711e9-cbaf-420d-b89f-ad44a634acdf
	I0809 18:58:30.495048  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:30.495056  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:30.495064  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:30.495074  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:30.495084  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:30 GMT
	I0809 18:58:30.495217  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:30.991768  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:30.991801  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:30.991809  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:30.991817  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:30.994438  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:30.994457  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:30.994464  907909 round_trippers.go:580]     Audit-Id: b3c74b00-904c-49b5-a7f5-b80057ecd74e
	I0809 18:58:30.994470  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:30.994478  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:30.994486  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:30.994494  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:30.994505  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:30 GMT
	I0809 18:58:30.994609  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:31.492285  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:31.492307  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:31.492316  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:31.492322  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:31.494652  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:31.494678  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:31.494689  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:31.494698  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:31 GMT
	I0809 18:58:31.494707  907909 round_trippers.go:580]     Audit-Id: e5bdef71-1ae6-4da9-80c5-749019f4d277
	I0809 18:58:31.494716  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:31.494728  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:31.494741  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:31.494874  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:31.992439  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:31.992460  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:31.992469  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:31.992475  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:31.994961  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:31.994981  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:31.994990  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:31 GMT
	I0809 18:58:31.994999  907909 round_trippers.go:580]     Audit-Id: 15482b3f-e756-4178-afaf-6bddfefc84be
	I0809 18:58:31.995008  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:31.995015  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:31.995022  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:31.995031  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:31.995127  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:31.995443  907909 node_ready.go:58] node "multinode-814696" has status "Ready":"False"
	I0809 18:58:32.491921  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:32.491944  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:32.491952  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:32.491958  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:32.494352  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:32.494378  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:32.494385  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:32.494392  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:32 GMT
	I0809 18:58:32.494400  907909 round_trippers.go:580]     Audit-Id: 5220a044-aea3-40ac-abf4-560ec0b62fbb
	I0809 18:58:32.494410  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:32.494420  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:32.494433  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:32.494574  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:32.992365  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:32.992391  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:32.992403  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:32.992410  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:32.994703  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:32.994724  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:32.994731  907909 round_trippers.go:580]     Audit-Id: daf5ed5c-9090-4292-ab9a-687e21e3c3d7
	I0809 18:58:32.994737  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:32.994745  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:32.994753  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:32.994763  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:32.994772  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:32 GMT
	I0809 18:58:32.994886  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:33.492509  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:33.492531  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:33.492539  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:33.492545  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:33.494830  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:33.494850  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:33.494857  907909 round_trippers.go:580]     Audit-Id: 47e2a0b6-90d8-4ceb-9fca-0a33cc9d0e0e
	I0809 18:58:33.494864  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:33.494873  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:33.494881  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:33.494889  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:33.494897  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:33 GMT
	I0809 18:58:33.495013  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:33.992685  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:33.992706  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:33.992714  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:33.992720  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:33.995035  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:33.995059  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:33.995070  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:33.995079  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:33.995089  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:33.995098  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:33.995107  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:33 GMT
	I0809 18:58:33.995117  907909 round_trippers.go:580]     Audit-Id: 1156657f-937a-4219-9571-d536a8c03ab2
	I0809 18:58:33.995216  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:33.995547  907909 node_ready.go:58] node "multinode-814696" has status "Ready":"False"
	I0809 18:58:34.491830  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:34.491854  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:34.491866  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:34.491883  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:34.494147  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:34.494168  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:34.494178  907909 round_trippers.go:580]     Audit-Id: 68894fbc-6984-43be-aaa7-76ab126e5560
	I0809 18:58:34.494186  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:34.494199  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:34.494211  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:34.494220  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:34.494233  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:34 GMT
	I0809 18:58:34.494349  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:34.991935  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:34.991958  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:34.991966  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:34.991973  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:34.994307  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:34.994326  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:34.994333  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:34.994338  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:34 GMT
	I0809 18:58:34.994344  907909 round_trippers.go:580]     Audit-Id: e50a859f-f79a-4932-898f-592d83230fb9
	I0809 18:58:34.994349  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:34.994355  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:34.994366  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:34.994495  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:35.492080  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:35.492103  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:35.492111  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:35.492117  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:35.494410  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:35.494434  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:35.494441  907909 round_trippers.go:580]     Audit-Id: 0dd30ee3-7511-4963-8de6-696ddcd88c27
	I0809 18:58:35.494447  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:35.494452  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:35.494458  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:35.494463  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:35.494470  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:35 GMT
	I0809 18:58:35.494595  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:35.992109  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:35.992131  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:35.992139  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:35.992145  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:35.994382  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:35.994408  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:35.994417  907909 round_trippers.go:580]     Audit-Id: 08048870-d535-4375-9a8c-7a7875dd8a1c
	I0809 18:58:35.994423  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:35.994428  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:35.994433  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:35.994439  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:35.994444  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:35 GMT
	I0809 18:58:35.994891  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:36.491863  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:36.491889  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:36.491898  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:36.491904  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:36.494237  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:36.494256  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:36.494264  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:36.494269  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:36 GMT
	I0809 18:58:36.494275  907909 round_trippers.go:580]     Audit-Id: 7e49c455-0197-489c-ad16-bef17a149a47
	I0809 18:58:36.494280  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:36.494286  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:36.494291  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:36.494429  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:36.494782  907909 node_ready.go:58] node "multinode-814696" has status "Ready":"False"
	I0809 18:58:36.991984  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:36.992003  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:36.992011  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:36.992018  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:36.994355  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:36.994383  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:36.994395  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:36.994402  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:36.994408  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:36.994416  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:36.994426  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:36 GMT
	I0809 18:58:36.994439  907909 round_trippers.go:580]     Audit-Id: 8c8b26af-c134-4da7-8f44-e86f2e0d7409
	I0809 18:58:36.994605  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:37.492255  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:37.492276  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:37.492284  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:37.492290  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:37.494618  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:37.494639  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:37.494646  907909 round_trippers.go:580]     Audit-Id: abb7df88-2859-41f6-84c0-ecab4229c112
	I0809 18:58:37.494652  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:37.494658  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:37.494663  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:37.494668  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:37.494675  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:37 GMT
	I0809 18:58:37.494836  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:37.991904  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:37.991932  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:37.991941  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:37.991949  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:37.994274  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:37.994304  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:37.994316  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:37 GMT
	I0809 18:58:37.994325  907909 round_trippers.go:580]     Audit-Id: 34368487-a012-4710-bdb3-dd765afcc47a
	I0809 18:58:37.994335  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:37.994348  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:37.994361  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:37.994373  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:37.994515  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:38.491815  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:38.491838  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:38.491847  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:38.491853  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:38.494268  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:38.494291  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:38.494298  907909 round_trippers.go:580]     Audit-Id: d2e73eea-a624-4fb4-87b1-009bc65a0cb9
	I0809 18:58:38.494306  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:38.494314  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:38.494322  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:38.494331  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:38.494339  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:38 GMT
	I0809 18:58:38.494478  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:38.494874  907909 node_ready.go:58] node "multinode-814696" has status "Ready":"False"
	I0809 18:58:38.991798  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:38.991817  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:38.991825  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:38.991831  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:38.993869  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:38.993890  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:38.993898  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:38.993903  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:38 GMT
	I0809 18:58:38.993909  907909 round_trippers.go:580]     Audit-Id: 5598e3b0-d3dc-4a2a-9831-64dd493e704b
	I0809 18:58:38.993915  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:38.993920  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:38.993931  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:38.994048  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:39.492652  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:39.492679  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:39.492690  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:39.492698  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:39.495043  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:39.495064  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:39.495072  907909 round_trippers.go:580]     Audit-Id: 076161dc-bbea-4b2a-bbdd-a07a43395818
	I0809 18:58:39.495078  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:39.495083  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:39.495088  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:39.495094  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:39.495099  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:39 GMT
	I0809 18:58:39.495232  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:39.991858  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:39.991883  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:39.991892  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:39.991898  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:39.994303  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:39.994325  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:39.994332  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:39.994340  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:39.994348  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:39.994358  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:39.994371  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:39 GMT
	I0809 18:58:39.994384  907909 round_trippers.go:580]     Audit-Id: 27e1b592-4aaf-4e7b-be89-1b0315555a1a
	I0809 18:58:39.994510  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:40.491810  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:40.491831  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:40.491839  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:40.491845  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:40.494151  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:40.494179  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:40.494187  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:40.494193  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:40 GMT
	I0809 18:58:40.494198  907909 round_trippers.go:580]     Audit-Id: e9572cc4-89e1-4aa2-9341-3f914dc26fb5
	I0809 18:58:40.494204  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:40.494209  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:40.494217  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:40.494377  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:40.991817  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:40.991839  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:40.991847  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:40.991854  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:40.994296  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:40.994324  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:40.994336  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:40.994345  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:40.994352  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:40.994361  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:40.994370  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:40 GMT
	I0809 18:58:40.994382  907909 round_trippers.go:580]     Audit-Id: 38636e0f-5b65-49d3-8be6-ffffd711a825
	I0809 18:58:40.994501  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:40.994936  907909 node_ready.go:58] node "multinode-814696" has status "Ready":"False"
	I0809 18:58:41.491835  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:41.491859  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:41.491870  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:41.491878  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:41.494181  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:41.494217  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:41.494230  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:41.494238  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:41.494247  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:41.494256  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:41.494265  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:41 GMT
	I0809 18:58:41.494277  907909 round_trippers.go:580]     Audit-Id: 44e6a3ff-08d9-453a-8336-e1eaba933129
	I0809 18:58:41.494408  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:41.991839  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:41.991862  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:41.991875  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:41.991885  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:41.994286  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:41.994306  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:41.994313  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:41.994319  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:41.994324  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:41 GMT
	I0809 18:58:41.994330  907909 round_trippers.go:580]     Audit-Id: 2a0ec686-8bfc-4c55-a894-a5fa094bc1fe
	I0809 18:58:41.994335  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:41.994340  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:41.994483  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:42.491815  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:42.491836  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:42.491844  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:42.491851  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:42.494149  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:42.494177  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:42.494188  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:42.494196  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:42 GMT
	I0809 18:58:42.494206  907909 round_trippers.go:580]     Audit-Id: f15844d6-809f-4712-b654-8c21eab286af
	I0809 18:58:42.494214  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:42.494229  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:42.494238  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:42.494382  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:42.992083  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:42.992104  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:42.992112  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:42.992118  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:42.994538  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:42.994561  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:42.994569  907909 round_trippers.go:580]     Audit-Id: 4262aa1f-7c1d-40fa-a479-fceafc7ef436
	I0809 18:58:42.994574  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:42.994580  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:42.994585  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:42.994591  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:42.994596  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:42 GMT
	I0809 18:58:42.994712  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:42.995061  907909 node_ready.go:58] node "multinode-814696" has status "Ready":"False"
	I0809 18:58:43.492545  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:43.492578  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:43.492590  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:43.492602  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:43.495015  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:43.495042  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:43.495055  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:43.495065  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:43.495075  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:43.495084  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:43.495093  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:43 GMT
	I0809 18:58:43.495100  907909 round_trippers.go:580]     Audit-Id: d8321a5d-0748-43bf-a68f-5ec3549c9982
	I0809 18:58:43.495236  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:43.991783  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:43.991808  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:43.991816  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:43.991822  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:43.994224  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:43.994254  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:43.994264  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:43.994273  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:43.994282  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:43.994292  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:43.994298  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:43 GMT
	I0809 18:58:43.994306  907909 round_trippers.go:580]     Audit-Id: cd5759d3-3f9d-45e5-9cdd-e17bcbbe7b17
	I0809 18:58:43.994398  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:44.491838  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:44.491863  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:44.491872  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:44.491879  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:44.494687  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:44.494717  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:44.494726  907909 round_trippers.go:580]     Audit-Id: 3bc1cf6e-312b-4d7c-8ce5-d48b17907d83
	I0809 18:58:44.494734  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:44.494742  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:44.494749  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:44.494759  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:44.494769  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:44 GMT
	I0809 18:58:44.494913  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"334","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0809 18:58:44.992525  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:44.992550  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:44.992559  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:44.992569  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:44.994890  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:44.994914  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:44.994924  907909 round_trippers.go:580]     Audit-Id: 82edd8d0-84c6-4dab-a015-8f15cd0f66aa
	I0809 18:58:44.994934  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:44.994944  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:44.994952  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:44.994958  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:44.994968  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:44 GMT
	I0809 18:58:44.995105  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"388","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0809 18:58:44.995448  907909 node_ready.go:49] node "multinode-814696" has status "Ready":"True"
	I0809 18:58:44.995465  907909 node_ready.go:38] duration metric: took 31.510081151s waiting for node "multinode-814696" to be "Ready" ...
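	(Aside: the 31.5s wait above was the loop repeatedly fetching the Node object and scanning status.conditions for the Ready condition; node_ready.go logged "False" until resourceVersion 388 arrived with Ready "True". As a hedged illustration only, not minikube's implementation, the same check with client-go, assuming a reachable cluster via the default kubeconfig.)

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether the node's NodeReady condition is True,
// the same condition the wait loop above keys on.
func nodeIsReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumes the default kubeconfig points at the cluster in the log;
	// the node name matches the one polled above.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-814696", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("Ready:", nodeIsReady(node))
}
```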
	I0809 18:58:44.995475  907909 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0809 18:58:44.995555  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0809 18:58:44.995565  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:44.995576  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:44.995588  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:45.000908  907909 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0809 18:58:45.000942  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:45.000954  907909 round_trippers.go:580]     Audit-Id: ed38157a-e982-46d5-ace6-4068f33d51c1
	I0809 18:58:45.000963  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:45.000971  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:45.000983  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:45.000995  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:45.001008  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:45 GMT
	I0809 18:58:45.001470  907909 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"394"},"items":[{"metadata":{"name":"coredns-5d78c9869d-zj6cv","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"6a7f440d-1020-4de5-9a75-42a2357a6e79","resourceVersion":"393","creationTimestamp":"2023-08-09T18:58:13Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"e9343197-b525-42fe-987e-679e974b9989","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:58:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9343197-b525-42fe-987e-679e974b9989\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55535 chars]
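	(Aside: that single GET returns the whole kube-system PodList, roughly 55 kB here; the labels enumerated at 18:58:44.995475 are then matched client-side to pick out the system-critical pods. A client-go sketch of the same list-then-filter step, again illustrative rather than minikube's code.)

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes the default kubeconfig points at the cluster in the log.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// One unfiltered list of kube-system pods, as in the trace; the
	// system-critical labels are then matched on the client side.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	critical := map[string][]string{
		"k8s-app":   {"kube-dns", "kube-proxy"},
		"component": {"etcd", "kube-apiserver", "kube-controller-manager", "kube-scheduler"},
	}
	for _, p := range pods.Items {
		for key, values := range critical {
			for _, v := range values {
				if p.Labels[key] == v {
					fmt.Println("system-critical pod:", p.Name)
				}
			}
		}
	}
}
```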
	I0809 18:58:45.005032  907909 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-zj6cv" in "kube-system" namespace to be "Ready" ...
	I0809 18:58:45.005120  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zj6cv
	I0809 18:58:45.005128  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:45.005136  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:45.005146  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:45.007919  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:45.007940  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:45.007952  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:45.007960  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:45.007970  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:45.007980  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:45.007997  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:45 GMT
	I0809 18:58:45.008006  907909 round_trippers.go:580]     Audit-Id: e524e157-bb8f-43d7-9c39-b5e9d73daa1a
	I0809 18:58:45.008127  907909 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zj6cv","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"6a7f440d-1020-4de5-9a75-42a2357a6e79","resourceVersion":"393","creationTimestamp":"2023-08-09T18:58:13Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"e9343197-b525-42fe-987e-679e974b9989","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:58:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9343197-b525-42fe-987e-679e974b9989\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0809 18:58:45.008679  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:45.008693  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:45.008704  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:45.008713  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:45.010979  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:45.011002  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:45.011012  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:45.011022  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:45.011032  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:45.011040  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:45.011051  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:45 GMT
	I0809 18:58:45.011065  907909 round_trippers.go:580]     Audit-Id: e8c33f3c-8f2d-4e5c-822b-553caef5f89e
	I0809 18:58:45.011206  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"388","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0809 18:58:45.011673  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zj6cv
	I0809 18:58:45.011688  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:45.011699  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:45.011710  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:45.013589  907909 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0809 18:58:45.013609  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:45.013619  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:45.013628  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:45.013641  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:45.013653  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:45.013662  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:45 GMT
	I0809 18:58:45.013671  907909 round_trippers.go:580]     Audit-Id: 6809d272-551e-42cc-87bf-6d99558cdbbc
	I0809 18:58:45.013766  907909 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zj6cv","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"6a7f440d-1020-4de5-9a75-42a2357a6e79","resourceVersion":"393","creationTimestamp":"2023-08-09T18:58:13Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"e9343197-b525-42fe-987e-679e974b9989","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:58:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9343197-b525-42fe-987e-679e974b9989\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0809 18:58:45.014295  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:45.014311  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:45.014321  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:45.014330  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:45.016116  907909 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0809 18:58:45.016138  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:45.016149  907909 round_trippers.go:580]     Audit-Id: 495b21af-c454-421a-8e99-6f220a04095d
	I0809 18:58:45.016158  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:45.016166  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:45.016174  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:45.016182  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:45.016191  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:45 GMT
	I0809 18:58:45.016330  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"388","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0809 18:58:45.516954  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zj6cv
	I0809 18:58:45.516978  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:45.516987  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:45.516993  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:45.519533  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:45.519553  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:45.519562  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:45.519568  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:45.519574  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:45.519579  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:45 GMT
	I0809 18:58:45.519586  907909 round_trippers.go:580]     Audit-Id: 6519e5e0-ddec-43a1-bb58-65e8847f0267
	I0809 18:58:45.519594  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:45.519750  907909 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zj6cv","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"6a7f440d-1020-4de5-9a75-42a2357a6e79","resourceVersion":"393","creationTimestamp":"2023-08-09T18:58:13Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"e9343197-b525-42fe-987e-679e974b9989","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:58:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9343197-b525-42fe-987e-679e974b9989\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0809 18:58:45.520286  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:45.520302  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:45.520310  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:45.520316  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:45.522571  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:45.522597  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:45.522608  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:45.522617  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:45 GMT
	I0809 18:58:45.522625  907909 round_trippers.go:580]     Audit-Id: a7499da8-639a-44b4-8b9f-66029a8467a5
	I0809 18:58:45.522639  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:45.522648  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:45.522663  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:45.522797  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"388","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0809 18:58:46.017354  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zj6cv
	I0809 18:58:46.017377  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:46.017386  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:46.017392  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:46.019902  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:46.019921  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:46.019933  907909 round_trippers.go:580]     Audit-Id: 72e9ef64-31dd-4e11-956a-e6df36c24086
	I0809 18:58:46.019942  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:46.019952  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:46.019965  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:46.019976  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:46.019986  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:46 GMT
	I0809 18:58:46.020136  907909 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zj6cv","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"6a7f440d-1020-4de5-9a75-42a2357a6e79","resourceVersion":"404","creationTimestamp":"2023-08-09T18:58:13Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"e9343197-b525-42fe-987e-679e974b9989","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:58:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9343197-b525-42fe-987e-679e974b9989\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0809 18:58:46.020735  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:46.020760  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:46.020772  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:46.020782  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:46.022694  907909 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0809 18:58:46.022716  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:46.022728  907909 round_trippers.go:580]     Audit-Id: 3facde78-b383-4f70-8c4d-f24f37fc02c4
	I0809 18:58:46.022740  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:46.022752  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:46.022760  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:46.022765  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:46.022773  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:46 GMT
	I0809 18:58:46.022901  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"388","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0809 18:58:46.023299  907909 pod_ready.go:92] pod "coredns-5d78c9869d-zj6cv" in "kube-system" namespace has status "Ready":"True"
	I0809 18:58:46.023318  907909 pod_ready.go:81] duration metric: took 1.018262574s waiting for pod "coredns-5d78c9869d-zj6cv" in "kube-system" namespace to be "Ready" ...
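	(Aside: between 18:58:45 and 18:58:46 the trace alternates GETs of the coredns pod and of its node, presumably so the wait can react if the node regresses to NotReady; the wait completes once the pod's PodReady condition reports True, visible above as the body grows from resourceVersion 393 to 404. The condition test itself is the same scan used for nodes; a minimal sketch with a hand-built pod standing in for the fetched object.)

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// podIsReady reports whether a pod's PodReady condition is True,
// which is what the pod_ready.go wait above is checking.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Demonstration with a hand-built pod; in the log the pod comes
	// from a GET on /api/v1/namespaces/kube-system/pods/<name>.
	pod := &corev1.Pod{Status: corev1.PodStatus{
		Conditions: []corev1.PodCondition{{Type: corev1.PodReady, Status: corev1.ConditionTrue}},
	}}
	fmt.Println(podIsReady(pod)) // true
}
```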
	I0809 18:58:46.023331  907909 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-814696" in "kube-system" namespace to be "Ready" ...
	I0809 18:58:46.023392  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-814696
	I0809 18:58:46.023401  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:46.023413  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:46.023426  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:46.025182  907909 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0809 18:58:46.025204  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:46.025213  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:46.025223  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:46.025232  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:46.025241  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:46.025248  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:46 GMT
	I0809 18:58:46.025254  907909 round_trippers.go:580]     Audit-Id: fea196eb-78c5-42a2-aa21-fdd58d046453
	I0809 18:58:46.025344  907909 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-814696","namespace":"kube-system","uid":"d56666fc-bcce-4c57-9002-5f96937419ef","resourceVersion":"296","creationTimestamp":"2023-08-09T18:58:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"78f5ed5a72b5cebc9a28edbb5087be98","kubernetes.io/config.mirror":"78f5ed5a72b5cebc9a28edbb5087be98","kubernetes.io/config.seen":"2023-08-09T18:58:00.573681511Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:58:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0809 18:58:46.025733  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:46.025746  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:46.025754  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:46.025761  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:46.027522  907909 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0809 18:58:46.027540  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:46.027552  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:46 GMT
	I0809 18:58:46.027561  907909 round_trippers.go:580]     Audit-Id: 287d32f7-4d16-4fea-bb93-d41929b19af5
	I0809 18:58:46.027570  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:46.027578  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:46.027590  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:46.027602  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:46.027738  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"388","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0809 18:58:46.028042  907909 pod_ready.go:92] pod "etcd-multinode-814696" in "kube-system" namespace has status "Ready":"True"
	I0809 18:58:46.028056  907909 pod_ready.go:81] duration metric: took 4.714177ms waiting for pod "etcd-multinode-814696" in "kube-system" namespace to be "Ready" ...
	I0809 18:58:46.028071  907909 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-814696" in "kube-system" namespace to be "Ready" ...
	I0809 18:58:46.028122  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-814696
	I0809 18:58:46.028130  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:46.028136  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:46.028143  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:46.029894  907909 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0809 18:58:46.029907  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:46.029914  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:46.029922  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:46.029932  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:46 GMT
	I0809 18:58:46.029941  907909 round_trippers.go:580]     Audit-Id: cd3982f4-7f59-4487-84f0-297bb9c520d5
	I0809 18:58:46.029954  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:46.029962  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:46.030157  907909 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-814696","namespace":"kube-system","uid":"80103e38-6b90-40bc-b9b0-dc7f247037c1","resourceVersion":"279","creationTimestamp":"2023-08-09T18:58:00Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b5723f67cc7d49c7cfe7e7e252b5ea4b","kubernetes.io/config.mirror":"b5723f67cc7d49c7cfe7e7e252b5ea4b","kubernetes.io/config.seen":"2023-08-09T18:58:00.573685327Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:58:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0809 18:58:46.030675  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:46.030689  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:46.030700  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:46.030710  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:46.032367  907909 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0809 18:58:46.032382  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:46.032388  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:46.032394  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:46.032399  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:46.032404  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:46.032410  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:46 GMT
	I0809 18:58:46.032415  907909 round_trippers.go:580]     Audit-Id: e6a3178c-9e6d-42ef-8451-3a258f0ce5b4
	I0809 18:58:46.032508  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"388","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0809 18:58:46.032771  907909 pod_ready.go:92] pod "kube-apiserver-multinode-814696" in "kube-system" namespace has status "Ready":"True"
	I0809 18:58:46.032782  907909 pod_ready.go:81] duration metric: took 4.700283ms waiting for pod "kube-apiserver-multinode-814696" in "kube-system" namespace to be "Ready" ...
	I0809 18:58:46.032792  907909 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-814696" in "kube-system" namespace to be "Ready" ...
	I0809 18:58:46.032831  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-814696
	I0809 18:58:46.032838  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:46.032844  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:46.032850  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:46.034662  907909 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0809 18:58:46.034678  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:46.034684  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:46.034690  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:46 GMT
	I0809 18:58:46.034701  907909 round_trippers.go:580]     Audit-Id: 89aa3fc1-c618-41dc-87ee-9d9a8f6b8558
	I0809 18:58:46.034709  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:46.034717  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:46.034726  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:46.034903  907909 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-814696","namespace":"kube-system","uid":"cc402858-37ab-4592-bb91-ad7df4d9d568","resourceVersion":"289","creationTimestamp":"2023-08-09T18:58:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0ca4303aa62b2bf8ee8c8fbe590c5cf3","kubernetes.io/config.mirror":"0ca4303aa62b2bf8ee8c8fbe590c5cf3","kubernetes.io/config.seen":"2023-08-09T18:58:00.573686726Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:58:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0809 18:58:46.192570  907909 request.go:628] Waited for 157.249233ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:46.192626  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:46.192631  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:46.192639  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:46.192645  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:46.195031  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:46.195060  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:46.195071  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:46.195081  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:46 GMT
	I0809 18:58:46.195089  907909 round_trippers.go:580]     Audit-Id: 52c31062-7aae-407f-af1a-ac86c64fe8dc
	I0809 18:58:46.195096  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:46.195104  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:46.195116  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:46.195251  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"388","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0809 18:58:46.195619  907909 pod_ready.go:92] pod "kube-controller-manager-multinode-814696" in "kube-system" namespace has status "Ready":"True"
	I0809 18:58:46.195662  907909 pod_ready.go:81] duration metric: took 162.863426ms waiting for pod "kube-controller-manager-multinode-814696" in "kube-system" namespace to be "Ready" ...
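The "Waited for ... due to client-side throttling" messages above come from the Kubernetes client's own token-bucket rate limiter, not from server-side priority and fairness. A sketch of the same behavior with golang.org/x/time/rate, assuming client-go's historical defaults of 5 QPS with a burst of 10:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"golang.org/x/time/rate"
    )

    func main() {
    	// 5 requests/second with a burst of 10; the first 10 calls pass immediately,
    	// after which each Wait blocks until the bucket refills.
    	limiter := rate.NewLimiter(5, 10)
    	for i := 0; i < 15; i++ {
    		start := time.Now()
    		_ = limiter.Wait(context.Background())
    		if d := time.Since(start); d > time.Millisecond {
    			fmt.Printf("request %d waited %v due to client-side throttling\n", i, d)
    		}
    	}
    }
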
	I0809 18:58:46.195683  907909 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2tcmw" in "kube-system" namespace to be "Ready" ...
	I0809 18:58:46.393123  907909 request.go:628] Waited for 197.349272ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2tcmw
	I0809 18:58:46.393177  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2tcmw
	I0809 18:58:46.393186  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:46.393196  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:46.393203  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:46.395517  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:46.395536  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:46.395544  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:46 GMT
	I0809 18:58:46.395550  907909 round_trippers.go:580]     Audit-Id: 56f130c0-c363-4d8c-9e75-4fdd840ffc64
	I0809 18:58:46.395555  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:46.395560  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:46.395566  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:46.395574  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:46.395718  907909 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2tcmw","generateName":"kube-proxy-","namespace":"kube-system","uid":"d86217ed-fcd4-4549-9c9c-36742860c3e6","resourceVersion":"375","creationTimestamp":"2023-08-09T18:58:13Z","labels":{"controller-revision-hash":"86cc8bcbf7","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8eb73e3e-3a84-4784-aa7b-a41008607142","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:58:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8eb73e3e-3a84-4784-aa7b-a41008607142\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0809 18:58:46.593430  907909 request.go:628] Waited for 197.225006ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:46.593491  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:46.593495  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:46.593503  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:46.593509  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:46.595926  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:46.595950  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:46.595958  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:46 GMT
	I0809 18:58:46.595964  907909 round_trippers.go:580]     Audit-Id: 412f0ea6-c0d3-468e-9dc4-4d5c9e42efaa
	I0809 18:58:46.595969  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:46.595979  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:46.595990  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:46.596003  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:46.596167  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"388","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0809 18:58:46.596506  907909 pod_ready.go:92] pod "kube-proxy-2tcmw" in "kube-system" namespace has status "Ready":"True"
	I0809 18:58:46.596524  907909 pod_ready.go:81] duration metric: took 400.833738ms waiting for pod "kube-proxy-2tcmw" in "kube-system" namespace to be "Ready" ...
	I0809 18:58:46.596535  907909 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-814696" in "kube-system" namespace to be "Ready" ...
	I0809 18:58:46.792823  907909 request.go:628] Waited for 196.190144ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-814696
	I0809 18:58:46.792887  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-814696
	I0809 18:58:46.792892  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:46.792900  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:46.792910  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:46.795462  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:46.795486  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:46.795493  907909 round_trippers.go:580]     Audit-Id: de3b8489-848f-4583-b06a-4a8493088c98
	I0809 18:58:46.795499  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:46.795507  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:46.795515  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:46.795523  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:46.795531  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:46 GMT
	I0809 18:58:46.795717  907909 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-814696","namespace":"kube-system","uid":"b55faa0c-0699-4d6b-b004-d6bea8ecd1a8","resourceVersion":"309","creationTimestamp":"2023-08-09T18:58:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"267a38d43b18369f9e34d21719e40087","kubernetes.io/config.mirror":"267a38d43b18369f9e34d21719e40087","kubernetes.io/config.seen":"2023-08-09T18:58:00.573689109Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:58:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0809 18:58:46.993497  907909 request.go:628] Waited for 197.362054ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:46.993554  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:58:46.993559  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:46.993567  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:46.993573  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:46.996319  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:46.996339  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:46.996346  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:46.996355  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:46.996364  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:46.996374  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:46 GMT
	I0809 18:58:46.996387  907909 round_trippers.go:580]     Audit-Id: 7e59dbff-12e2-4264-a544-ff2fdcb9872c
	I0809 18:58:46.996396  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:46.996501  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"388","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0809 18:58:46.996930  907909 pod_ready.go:92] pod "kube-scheduler-multinode-814696" in "kube-system" namespace has status "Ready":"True"
	I0809 18:58:46.996949  907909 pod_ready.go:81] duration metric: took 400.398759ms waiting for pod "kube-scheduler-multinode-814696" in "kube-system" namespace to be "Ready" ...
	I0809 18:58:46.996960  907909 pod_ready.go:38] duration metric: took 2.001469624s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
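Each wait above is the same bounded poll loop: check the pod, sleep, re-check until Ready or the 6m0s per-pod timeout elapses (the whole phase took about 2s here because everything was already Running). A stdlib-only sketch of the pattern; waitFor and the stand-in condition are hypothetical names, not minikube's helpers:

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // waitFor polls cond every interval until it returns true or timeout elapses.
    func waitFor(interval, timeout time.Duration, cond func() (bool, error)) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		ok, err := cond()
    		if err != nil {
    			return err
    		}
    		if ok {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return errors.New("timed out waiting for condition")
    		}
    		time.Sleep(interval)
    	}
    }

    func main() {
    	start := time.Now()
    	err := waitFor(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
    		return time.Since(start) > time.Second, nil // stand-in for a pod Ready check
    	})
    	fmt.Println(err)
    }
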
	I0809 18:58:46.996982  907909 api_server.go:52] waiting for apiserver process to appear ...
	I0809 18:58:46.997039  907909 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0809 18:58:47.006787  907909 command_runner.go:130] > 1442
	I0809 18:58:47.007511  907909 api_server.go:72] duration metric: took 33.612015366s to wait for apiserver process to appear ...
	I0809 18:58:47.007531  907909 api_server.go:88] waiting for apiserver healthz status ...
	I0809 18:58:47.007579  907909 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0809 18:58:47.012690  907909 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
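The healthz probe above is a plain GET that passes when the server answers 200 with the body "ok". A sketch with certificate verification simplified to InsecureSkipVerify, where minikube uses the cluster CA:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    )

    // checkHealthz returns nil only for a 200 response whose body is exactly "ok".
    func checkHealthz(url string) error {
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    	}}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
    		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
    	}
    	return nil
    }

    func main() {
    	fmt.Println(checkHealthz("https://192.168.58.2:8443/healthz"))
    }
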
	I0809 18:58:47.012765  907909 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0809 18:58:47.012775  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:47.012788  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:47.012797  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:47.013902  907909 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0809 18:58:47.013922  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:47.013931  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:47.013943  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:47.013957  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:47.013968  907909 round_trippers.go:580]     Content-Length: 263
	I0809 18:58:47.013976  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:47 GMT
	I0809 18:58:47.013983  907909 round_trippers.go:580]     Audit-Id: 3cd8f379-7101-4a42-b6bf-6a784f2f2a9a
	I0809 18:58:47.013991  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:47.014012  907909 request.go:1188] Response Body: {
	  "major": "1",
	  "minor": "27",
	  "gitVersion": "v1.27.4",
	  "gitCommit": "fa3d7990104d7c1f16943a67f11b154b71f6a132",
	  "gitTreeState": "clean",
	  "buildDate": "2023-07-19T12:14:49Z",
	  "goVersion": "go1.20.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0809 18:58:47.014102  907909 api_server.go:141] control plane version: v1.27.4
	I0809 18:58:47.014119  907909 api_server.go:131] duration metric: took 6.583528ms to wait for apiserver health ...
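The "control plane version" line is read from the gitVersion field of the /version body above. A decoding sketch with a struct mirroring just the fields used; versionInfo is an illustrative name:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // versionInfo mirrors the /version response body shown above.
    type versionInfo struct {
    	Major      string `json:"major"`
    	Minor      string `json:"minor"`
    	GitVersion string `json:"gitVersion"`
    	Platform   string `json:"platform"`
    }

    func main() {
    	body := []byte(`{"major":"1","minor":"27","gitVersion":"v1.27.4","platform":"linux/amd64"}`)
    	var v versionInfo
    	if err := json.Unmarshal(body, &v); err != nil {
    		panic(err)
    	}
    	fmt.Println("control plane version:", v.GitVersion) // -> v1.27.4
    }
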
	I0809 18:58:47.014128  907909 system_pods.go:43] waiting for kube-system pods to appear ...
	I0809 18:58:47.193535  907909 request.go:628] Waited for 179.328231ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0809 18:58:47.193608  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0809 18:58:47.193616  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:47.193624  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:47.193633  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:47.197328  907909 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0809 18:58:47.197352  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:47.197363  907909 round_trippers.go:580]     Audit-Id: 2bb4bd18-056c-4bcc-a98f-8b0938185d7c
	I0809 18:58:47.197370  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:47.197378  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:47.197386  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:47.197395  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:47.197409  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:47 GMT
	I0809 18:58:47.197917  907909 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"409"},"items":[{"metadata":{"name":"coredns-5d78c9869d-zj6cv","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"6a7f440d-1020-4de5-9a75-42a2357a6e79","resourceVersion":"404","creationTimestamp":"2023-08-09T18:58:13Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"e9343197-b525-42fe-987e-679e974b9989","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:58:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9343197-b525-42fe-987e-679e974b9989\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55613 chars]
	I0809 18:58:47.199805  907909 system_pods.go:59] 8 kube-system pods found
	I0809 18:58:47.199832  907909 system_pods.go:61] "coredns-5d78c9869d-zj6cv" [6a7f440d-1020-4de5-9a75-42a2357a6e79] Running
	I0809 18:58:47.199840  907909 system_pods.go:61] "etcd-multinode-814696" [d56666fc-bcce-4c57-9002-5f96937419ef] Running
	I0809 18:58:47.199846  907909 system_pods.go:61] "kindnet-n72x8" [ef0f59b8-8f6f-4043-8edd-b34c75101580] Running
	I0809 18:58:47.199852  907909 system_pods.go:61] "kube-apiserver-multinode-814696" [80103e38-6b90-40bc-b9b0-dc7f247037c1] Running
	I0809 18:58:47.199860  907909 system_pods.go:61] "kube-controller-manager-multinode-814696" [cc402858-37ab-4592-bb91-ad7df4d9d568] Running
	I0809 18:58:47.199867  907909 system_pods.go:61] "kube-proxy-2tcmw" [d86217ed-fcd4-4549-9c9c-36742860c3e6] Running
	I0809 18:58:47.199876  907909 system_pods.go:61] "kube-scheduler-multinode-814696" [b55faa0c-0699-4d6b-b004-d6bea8ecd1a8] Running
	I0809 18:58:47.199887  907909 system_pods.go:61] "storage-provisioner" [8fe3ded6-9715-4d97-8107-25b1ae2c1949] Running
	I0809 18:58:47.199895  907909 system_pods.go:74] duration metric: took 185.76088ms to wait for pod list to return data ...
	I0809 18:58:47.199906  907909 default_sa.go:34] waiting for default service account to be created ...
	I0809 18:58:47.393353  907909 request.go:628] Waited for 193.361429ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0809 18:58:47.393426  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0809 18:58:47.393432  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:47.393440  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:47.393452  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:47.395860  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:47.395883  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:47.395895  907909 round_trippers.go:580]     Content-Length: 261
	I0809 18:58:47.395903  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:47 GMT
	I0809 18:58:47.395910  907909 round_trippers.go:580]     Audit-Id: 9502c868-8c69-4a46-9d80-236005cb466c
	I0809 18:58:47.395917  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:47.395925  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:47.395934  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:47.395944  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:47.395973  907909 request.go:1188] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"409"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"bf081258-fafc-4297-a2fe-4e3b8eaffc80","resourceVersion":"304","creationTimestamp":"2023-08-09T18:58:13Z"}}]}
	I0809 18:58:47.396182  907909 default_sa.go:45] found service account: "default"
	I0809 18:58:47.396202  907909 default_sa.go:55] duration metric: took 196.28854ms for default service account to be created ...
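The default-service-account wait only needs to see an item named "default" in the ServiceAccountList body above. A sketch of that scan; saList and hasDefaultSA are illustrative names:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // saList mirrors just the fields of the ServiceAccountList body shown above.
    type saList struct {
    	Items []struct {
    		Metadata struct {
    			Name string `json:"name"`
    		} `json:"metadata"`
    	} `json:"items"`
    }

    // hasDefaultSA reports whether the list contains a service account named "default".
    func hasDefaultSA(body []byte) (bool, error) {
    	var l saList
    	if err := json.Unmarshal(body, &l); err != nil {
    		return false, err
    	}
    	for _, sa := range l.Items {
    		if sa.Metadata.Name == "default" {
    			return true, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	body := []byte(`{"items":[{"metadata":{"name":"default"}}]}`)
    	fmt.Println(hasDefaultSA(body)) // -> true <nil>
    }
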
	I0809 18:58:47.396212  907909 system_pods.go:116] waiting for k8s-apps to be running ...
	I0809 18:58:47.592579  907909 request.go:628] Waited for 196.277898ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0809 18:58:47.592640  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0809 18:58:47.592645  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:47.592653  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:47.592660  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:47.596269  907909 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0809 18:58:47.596304  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:47.596315  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:47.596322  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:47 GMT
	I0809 18:58:47.596328  907909 round_trippers.go:580]     Audit-Id: 0778461e-b709-4949-b3cf-0d02e1688245
	I0809 18:58:47.596337  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:47.596342  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:47.596352  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:47.596762  907909 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"409"},"items":[{"metadata":{"name":"coredns-5d78c9869d-zj6cv","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"6a7f440d-1020-4de5-9a75-42a2357a6e79","resourceVersion":"404","creationTimestamp":"2023-08-09T18:58:13Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"e9343197-b525-42fe-987e-679e974b9989","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:58:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9343197-b525-42fe-987e-679e974b9989\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55613 chars]
	I0809 18:58:47.598506  907909 system_pods.go:86] 8 kube-system pods found
	I0809 18:58:47.598537  907909 system_pods.go:89] "coredns-5d78c9869d-zj6cv" [6a7f440d-1020-4de5-9a75-42a2357a6e79] Running
	I0809 18:58:47.598545  907909 system_pods.go:89] "etcd-multinode-814696" [d56666fc-bcce-4c57-9002-5f96937419ef] Running
	I0809 18:58:47.598551  907909 system_pods.go:89] "kindnet-n72x8" [ef0f59b8-8f6f-4043-8edd-b34c75101580] Running
	I0809 18:58:47.598555  907909 system_pods.go:89] "kube-apiserver-multinode-814696" [80103e38-6b90-40bc-b9b0-dc7f247037c1] Running
	I0809 18:58:47.598560  907909 system_pods.go:89] "kube-controller-manager-multinode-814696" [cc402858-37ab-4592-bb91-ad7df4d9d568] Running
	I0809 18:58:47.598564  907909 system_pods.go:89] "kube-proxy-2tcmw" [d86217ed-fcd4-4549-9c9c-36742860c3e6] Running
	I0809 18:58:47.598568  907909 system_pods.go:89] "kube-scheduler-multinode-814696" [b55faa0c-0699-4d6b-b004-d6bea8ecd1a8] Running
	I0809 18:58:47.598574  907909 system_pods.go:89] "storage-provisioner" [8fe3ded6-9715-4d97-8107-25b1ae2c1949] Running
	I0809 18:58:47.598581  907909 system_pods.go:126] duration metric: took 202.363866ms to wait for k8s-apps to be running ...
	I0809 18:58:47.598595  907909 system_svc.go:44] waiting for kubelet service to be running ....
	I0809 18:58:47.598638  907909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0809 18:58:47.609574  907909 system_svc.go:56] duration metric: took 10.970148ms WaitForService to wait for kubelet.
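The kubelet check above shells out to systemctl and reads only the exit code; --quiet suppresses all output. A sketch with os/exec that mirrors the logged invocation, run locally here where minikube runs it through ssh_runner:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // kubeletActive reports whether systemd considers the kubelet unit active.
    // With --quiet, systemctl communicates purely via its exit status.
    func kubeletActive() bool {
    	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil
    }

    func main() {
    	fmt.Println("kubelet active:", kubeletActive())
    }
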
	I0809 18:58:47.609597  907909 kubeadm.go:581] duration metric: took 34.214105637s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0809 18:58:47.609616  907909 node_conditions.go:102] verifying NodePressure condition ...
	I0809 18:58:47.793216  907909 request.go:628] Waited for 183.51821ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0809 18:58:47.793273  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0809 18:58:47.793277  907909 round_trippers.go:469] Request Headers:
	I0809 18:58:47.793285  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:58:47.793292  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:58:47.795946  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:58:47.795968  907909 round_trippers.go:577] Response Headers:
	I0809 18:58:47.795975  907909 round_trippers.go:580]     Audit-Id: 24860696-ab8e-4e20-843d-76d575f1e998
	I0809 18:58:47.795982  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:58:47.795991  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:58:47.795999  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:58:47.796009  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:58:47.796017  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:58:47 GMT
	I0809 18:58:47.796123  907909 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"409"},"items":[{"metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"388","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6000 chars]
	I0809 18:58:47.796475  907909 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0809 18:58:47.796490  907909 node_conditions.go:123] node cpu capacity is 8
	I0809 18:58:47.796502  907909 node_conditions.go:105] duration metric: took 186.882225ms to run NodePressure ...
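The NodePressure verification reads the node's capacity map, which is where the ephemeral-storage 304681132Ki and cpu 8 figures above come from. A decoding sketch; nodeCapacity mirrors only the fields read and is an illustrative name:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // nodeCapacity models the status.capacity map of the Node JSON above.
    type nodeCapacity struct {
    	Status struct {
    		Capacity map[string]string `json:"capacity"`
    	} `json:"status"`
    }

    func main() {
    	body := []byte(`{"status":{"capacity":{"cpu":"8","ephemeral-storage":"304681132Ki"}}}`)
    	var n nodeCapacity
    	if err := json.Unmarshal(body, &n); err != nil {
    		panic(err)
    	}
    	fmt.Println("node cpu capacity:", n.Status.Capacity["cpu"])
    	fmt.Println("node storage ephemeral capacity:", n.Status.Capacity["ephemeral-storage"])
    }
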
	I0809 18:58:47.796514  907909 start.go:228] waiting for startup goroutines ...
	I0809 18:58:47.796523  907909 start.go:233] waiting for cluster config update ...
	I0809 18:58:47.796533  907909 start.go:242] writing updated cluster config ...
	I0809 18:58:47.798569  907909 out.go:177] 
	I0809 18:58:47.800048  907909 config.go:182] Loaded profile config "multinode-814696": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0809 18:58:47.800138  907909 profile.go:148] Saving config to /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/multinode-814696/config.json ...
	I0809 18:58:47.801921  907909 out.go:177] * Starting worker node multinode-814696-m02 in cluster multinode-814696
	I0809 18:58:47.803325  907909 cache.go:122] Beginning downloading kic base image for docker with crio
	I0809 18:58:47.804728  907909 out.go:177] * Pulling base image ...
	I0809 18:58:47.806359  907909 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0809 18:58:47.806379  907909 cache.go:57] Caching tarball of preloaded images
	I0809 18:58:47.806454  907909 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 in local docker daemon
	I0809 18:58:47.806476  907909 preload.go:174] Found /home/jenkins/minikube-integration/17011-816603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0809 18:58:47.806485  907909 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on crio
	I0809 18:58:47.806578  907909 profile.go:148] Saving config to /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/multinode-814696/config.json ...
	I0809 18:58:47.822419  907909 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 in local docker daemon, skipping pull
	I0809 18:58:47.822449  907909 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 exists in daemon, skipping load
	I0809 18:58:47.822471  907909 cache.go:195] Successfully downloaded all kic artifacts
	I0809 18:58:47.822514  907909 start.go:365] acquiring machines lock for multinode-814696-m02: {Name:mk832b7b75e13828e8056d38320c2df525840ff8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 18:58:47.822637  907909 start.go:369] acquired machines lock for "multinode-814696-m02" in 98.023µs
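The machines lock above (Delay:500ms Timeout:10m0s) serializes concurrent host creation. minikube uses a named mutex for this; the sketch below shows only the acquire-with-retry pattern, using an exclusive lock file as an assumed stand-in:

    package main

    import (
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    // acquire takes an exclusive lock by creating lockPath, retrying every delay
    // until timeout; O_EXCL makes creation fail while another holder exists.
    func acquire(lockPath string, delay, timeout time.Duration) (release func(), err error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
    		if err == nil {
    			f.Close()
    			return func() { os.Remove(lockPath) }, nil
    		}
    		if time.Now().After(deadline) {
    			return nil, errors.New("timed out acquiring machines lock")
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	release, err := acquire(os.TempDir()+"/machines.lock", 500*time.Millisecond, 10*time.Minute)
    	if err != nil {
    		panic(err)
    	}
    	defer release()
    	fmt.Println("lock held; safe to provision multinode-814696-m02")
    }
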
	I0809 18:58:47.822668  907909 start.go:93] Provisioning new machine with config: &{Name:multinode-814696 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-814696 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name:m02 IP: Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0809 18:58:47.822789  907909 start.go:125] createHost starting for "m02" (driver="docker")
	I0809 18:58:47.824651  907909 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0809 18:58:47.824796  907909 start.go:159] libmachine.API.Create for "multinode-814696" (driver="docker")
	I0809 18:58:47.824822  907909 client.go:168] LocalClient.Create starting
	I0809 18:58:47.824897  907909 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca.pem
	I0809 18:58:47.824928  907909 main.go:141] libmachine: Decoding PEM data...
	I0809 18:58:47.824945  907909 main.go:141] libmachine: Parsing certificate...
	I0809 18:58:47.824999  907909 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17011-816603/.minikube/certs/cert.pem
	I0809 18:58:47.825019  907909 main.go:141] libmachine: Decoding PEM data...
	I0809 18:58:47.825028  907909 main.go:141] libmachine: Parsing certificate...
	I0809 18:58:47.825219  907909 cli_runner.go:164] Run: docker network inspect multinode-814696 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0809 18:58:47.840523  907909 network_create.go:76] Found existing network {name:multinode-814696 subnet:0xc00107f4a0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0809 18:58:47.840554  907909 kic.go:117] calculated static IP "192.168.58.3" for the "multinode-814696-m02" container
	I0809 18:58:47.840604  907909 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0809 18:58:47.855587  907909 cli_runner.go:164] Run: docker volume create multinode-814696-m02 --label name.minikube.sigs.k8s.io=multinode-814696-m02 --label created_by.minikube.sigs.k8s.io=true
	I0809 18:58:47.871357  907909 oci.go:103] Successfully created a docker volume multinode-814696-m02
	I0809 18:58:47.871432  907909 cli_runner.go:164] Run: docker run --rm --name multinode-814696-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-814696-m02 --entrypoint /usr/bin/test -v multinode-814696-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 -d /var/lib
	I0809 18:58:48.326162  907909 oci.go:107] Successfully prepared a docker volume multinode-814696-m02
	I0809 18:58:48.326205  907909 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0809 18:58:48.326233  907909 kic.go:190] Starting extracting preloaded images to volume ...
	I0809 18:58:48.326302  907909 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17011-816603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-814696-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 -I lz4 -xf /preloaded.tar -C /extractDir
	I0809 18:58:53.205545  907909 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17011-816603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-814696-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 -I lz4 -xf /preloaded.tar -C /extractDir: (4.879187628s)
	I0809 18:58:53.205583  907909 kic.go:199] duration metric: took 4.879347 seconds to extract preloaded images to volume
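The extraction step above runs tar inside a throwaway container so the lz4 preload lands directly in the new node's volume. A sketch assembling the same docker invocation with os/exec; the tarball path is shortened and the image digest omitted for readability:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	tarball := "/home/jenkins/.../preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4" // shortened
    	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971"                      // digest omitted
    	// --rm discards the helper container; the named volume keeps the extracted images.
    	cmd := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", "multinode-814696-m02:/extractDir",
    		image,
    		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    	out, err := cmd.CombinedOutput()
    	fmt.Println(string(out), err)
    }
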
	W0809 18:58:53.205760  907909 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0809 18:58:53.205868  907909 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0809 18:58:53.259199  907909 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-814696-m02 --name multinode-814696-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-814696-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-814696-m02 --network multinode-814696 --ip 192.168.58.3 --volume multinode-814696-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37
	I0809 18:58:53.545399  907909 cli_runner.go:164] Run: docker container inspect multinode-814696-m02 --format={{.State.Running}}
	I0809 18:58:53.563879  907909 cli_runner.go:164] Run: docker container inspect multinode-814696-m02 --format={{.State.Status}}
	I0809 18:58:53.581927  907909 cli_runner.go:164] Run: docker exec multinode-814696-m02 stat /var/lib/dpkg/alternatives/iptables
	I0809 18:58:53.645121  907909 oci.go:144] the created container "multinode-814696-m02" has a running status.
	I0809 18:58:53.645159  907909 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17011-816603/.minikube/machines/multinode-814696-m02/id_rsa...
	I0809 18:58:53.786568  907909 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/machines/multinode-814696-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0809 18:58:53.786612  907909 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17011-816603/.minikube/machines/multinode-814696-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
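The kic SSH key created above is an ordinary RSA keypair; the 381-byte public half is what gets appended to /home/docker/.ssh/authorized_keys. A generation sketch with crypto/rsa and golang.org/x/crypto/ssh, assumed equivalent to what the log describes:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	// Private key in PEM, as written to .../machines/<name>/id_rsa.
    	privPEM := pem.EncodeToMemory(&pem.Block{
    		Type:  "RSA PRIVATE KEY",
    		Bytes: x509.MarshalPKCS1PrivateKey(key),
    	})
    	// Public key in authorized_keys format, as copied into the container.
    	pub, err := ssh.NewPublicKey(&key.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("private: %d bytes, public: %s", len(privPEM), ssh.MarshalAuthorizedKey(pub))
    }
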
	I0809 18:58:53.807805  907909 cli_runner.go:164] Run: docker container inspect multinode-814696-m02 --format={{.State.Status}}
	I0809 18:58:53.827049  907909 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0809 18:58:53.827083  907909 kic_runner.go:114] Args: [docker exec --privileged multinode-814696-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0809 18:58:53.890387  907909 cli_runner.go:164] Run: docker container inspect multinode-814696-m02 --format={{.State.Status}}
	I0809 18:58:53.909128  907909 machine.go:88] provisioning docker machine ...
	I0809 18:58:53.909178  907909 ubuntu.go:169] provisioning hostname "multinode-814696-m02"
	I0809 18:58:53.909246  907909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-814696-m02
	I0809 18:58:53.927855  907909 main.go:141] libmachine: Using SSH client type: native
	I0809 18:58:53.928476  907909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 127.0.0.1 33487 <nil> <nil>}
	I0809 18:58:53.928497  907909 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-814696-m02 && echo "multinode-814696-m02" | sudo tee /etc/hostname
	I0809 18:58:53.929187  907909 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53240->127.0.0.1:33487: read: connection reset by peer
	I0809 18:58:57.078821  907909 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-814696-m02
	
	I0809 18:58:57.078928  907909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-814696-m02
	I0809 18:58:57.095227  907909 main.go:141] libmachine: Using SSH client type: native
	I0809 18:58:57.095719  907909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 127.0.0.1 33487 <nil> <nil>}
	I0809 18:58:57.095740  907909 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-814696-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-814696-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-814696-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0809 18:58:57.231819  907909 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0809 18:58:57.231850  907909 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17011-816603/.minikube CaCertPath:/home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17011-816603/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17011-816603/.minikube}
	I0809 18:58:57.231877  907909 ubuntu.go:177] setting up certificates
	I0809 18:58:57.231890  907909 provision.go:83] configureAuth start
	I0809 18:58:57.231945  907909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-814696-m02
	I0809 18:58:57.248048  907909 provision.go:138] copyHostCerts
	I0809 18:58:57.248090  907909 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17011-816603/.minikube/ca.pem
	I0809 18:58:57.248117  907909 exec_runner.go:144] found /home/jenkins/minikube-integration/17011-816603/.minikube/ca.pem, removing ...
	I0809 18:58:57.248135  907909 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17011-816603/.minikube/ca.pem
	I0809 18:58:57.248222  907909 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17011-816603/.minikube/ca.pem (1082 bytes)
	I0809 18:58:57.248312  907909 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17011-816603/.minikube/cert.pem
	I0809 18:58:57.248331  907909 exec_runner.go:144] found /home/jenkins/minikube-integration/17011-816603/.minikube/cert.pem, removing ...
	I0809 18:58:57.248335  907909 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17011-816603/.minikube/cert.pem
	I0809 18:58:57.248359  907909 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17011-816603/.minikube/cert.pem (1123 bytes)
	I0809 18:58:57.248627  907909 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17011-816603/.minikube/key.pem
	I0809 18:58:57.248671  907909 exec_runner.go:144] found /home/jenkins/minikube-integration/17011-816603/.minikube/key.pem, removing ...
	I0809 18:58:57.248678  907909 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17011-816603/.minikube/key.pem
	I0809 18:58:57.248767  907909 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17011-816603/.minikube/key.pem (1679 bytes)
	I0809 18:58:57.248862  907909 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17011-816603/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca-key.pem org=jenkins.multinode-814696-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-814696-m02]
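configureAuth generates a server certificate whose SANs cover the san=[...] list above. A crypto/x509 sketch of that step; it self-signs for brevity, whereas minikube signs with the ca.pem/ca-key.pem pair named in the log:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	key, _ := rsa.GenerateKey(rand.Reader, 2048)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-814696-m02"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
    		// SANs matching the san=[...] list in the log.
    		DNSNames:    []string{"localhost", "minikube", "multinode-814696-m02"},
    		IPAddresses: []net.IP{net.ParseIP("192.168.58.3"), net.ParseIP("127.0.0.1")},
    		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	fmt.Println(len(der), err) // DER length of the new server certificate
    }
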
	I0809 18:58:57.529748  907909 provision.go:172] copyRemoteCerts
	I0809 18:58:57.529814  907909 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0809 18:58:57.529850  907909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-814696-m02
	I0809 18:58:57.545842  907909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33487 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/multinode-814696-m02/id_rsa Username:docker}
	I0809 18:58:57.644419  907909 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0809 18:58:57.644488  907909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0809 18:58:57.666125  907909 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0809 18:58:57.666191  907909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0809 18:58:57.687771  907909 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0809 18:58:57.687840  907909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0809 18:58:57.709120  907909 provision.go:86] duration metric: configureAuth took 477.213005ms
	I0809 18:58:57.709147  907909 ubuntu.go:193] setting minikube options for container-runtime
	I0809 18:58:57.709350  907909 config.go:182] Loaded profile config "multinode-814696": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0809 18:58:57.709471  907909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-814696-m02
	I0809 18:58:57.725934  907909 main.go:141] libmachine: Using SSH client type: native
	I0809 18:58:57.726424  907909 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 127.0.0.1 33487 <nil> <nil>}
	I0809 18:58:57.726447  907909 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0809 18:58:57.951011  907909 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0809 18:58:57.951042  907909 machine.go:91] provisioned docker machine in 4.041884716s
	I0809 18:58:57.951051  907909 client.go:171] LocalClient.Create took 10.12622435s
	I0809 18:58:57.951070  907909 start.go:167] duration metric: libmachine.API.Create for "multinode-814696" took 10.126274174s
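The crio.minikube restart command above travels over the SSH tunnel libmachine opened on 127.0.0.1:33487 with the node's id_rsa key (both visible in the log). A stripped-down sketch of running one such command with golang.org/x/crypto/ssh; this is an assumed equivalent, not minikube's ssh_runner:

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/17011-816603/.minikube/machines/multinode-814696-m02/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM on localhost only
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:33487", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()

        // The same command the log shows being sent to the node.
        cmd := "sudo mkdir -p /etc/sysconfig && printf %s \"\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio"
        out, err := sess.CombinedOutput(cmd)
        fmt.Printf("err=%v output=%s\n", err, out)
    }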
	I0809 18:58:57.951080  907909 start.go:300] post-start starting for "multinode-814696-m02" (driver="docker")
	I0809 18:58:57.951091  907909 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0809 18:58:57.951150  907909 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0809 18:58:57.951187  907909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-814696-m02
	I0809 18:58:57.967247  907909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33487 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/multinode-814696-m02/id_rsa Username:docker}
	I0809 18:58:58.068750  907909 ssh_runner.go:195] Run: cat /etc/os-release
	I0809 18:58:58.071724  907909 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.2 LTS"
	I0809 18:58:58.071742  907909 command_runner.go:130] > NAME="Ubuntu"
	I0809 18:58:58.071747  907909 command_runner.go:130] > VERSION_ID="22.04"
	I0809 18:58:58.071753  907909 command_runner.go:130] > VERSION="22.04.2 LTS (Jammy Jellyfish)"
	I0809 18:58:58.071757  907909 command_runner.go:130] > VERSION_CODENAME=jammy
	I0809 18:58:58.071761  907909 command_runner.go:130] > ID=ubuntu
	I0809 18:58:58.071764  907909 command_runner.go:130] > ID_LIKE=debian
	I0809 18:58:58.071769  907909 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0809 18:58:58.071774  907909 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0809 18:58:58.071780  907909 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0809 18:58:58.071786  907909 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0809 18:58:58.071790  907909 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0809 18:58:58.071853  907909 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0809 18:58:58.071877  907909 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0809 18:58:58.071888  907909 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0809 18:58:58.071894  907909 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0809 18:58:58.071907  907909 filesync.go:126] Scanning /home/jenkins/minikube-integration/17011-816603/.minikube/addons for local assets ...
	I0809 18:58:58.071957  907909 filesync.go:126] Scanning /home/jenkins/minikube-integration/17011-816603/.minikube/files for local assets ...
	I0809 18:58:58.072021  907909 filesync.go:149] local asset: /home/jenkins/minikube-integration/17011-816603/.minikube/files/etc/ssl/certs/8234342.pem -> 8234342.pem in /etc/ssl/certs
	I0809 18:58:58.072030  907909 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/files/etc/ssl/certs/8234342.pem -> /etc/ssl/certs/8234342.pem
	I0809 18:58:58.072102  907909 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0809 18:58:58.080074  907909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/files/etc/ssl/certs/8234342.pem --> /etc/ssl/certs/8234342.pem (1708 bytes)
	I0809 18:58:58.101964  907909 start.go:303] post-start completed in 150.866628ms
	I0809 18:58:58.102317  907909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-814696-m02
	I0809 18:58:58.119108  907909 profile.go:148] Saving config to /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/multinode-814696/config.json ...
	I0809 18:58:58.119366  907909 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0809 18:58:58.119412  907909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-814696-m02
	I0809 18:58:58.135901  907909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33487 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/multinode-814696-m02/id_rsa Username:docker}
	I0809 18:58:58.228508  907909 command_runner.go:130] > 22%
	I0809 18:58:58.228581  907909 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0809 18:58:58.232711  907909 command_runner.go:130] > 228G
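The two df probes above read the usage percentage of /var (22%) and its free space in gigabytes (228G). A small Go equivalent of the df | awk pipeline; the column positions assume GNU df's default layout:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    // dfField runs df with the given flag and returns column col (1-based)
    // of the second output line, mirroring awk 'NR==2{print $col}'.
    func dfField(flag, path string, col int) (string, error) {
        out, err := exec.Command("df", flag, path).Output()
        if err != nil {
            return "", err
        }
        lines := strings.Split(strings.TrimSpace(string(out)), "\n")
        if len(lines) < 2 {
            return "", fmt.Errorf("unexpected df output: %q", out)
        }
        fields := strings.Fields(lines[1])
        if col > len(fields) {
            return "", fmt.Errorf("df line has only %d fields", len(fields))
        }
        return fields[col-1], nil
    }

    func main() {
        used, err := dfField("-h", "/var", 5) // Use% column, e.g. "22%"
        if err != nil {
            log.Fatal(err)
        }
        free, err := dfField("-BG", "/var", 4) // Avail column, e.g. "228G"
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(used, free)
    }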
	I0809 18:58:58.232917  907909 start.go:128] duration metric: createHost completed in 10.410112094s
	I0809 18:58:58.232935  907909 start.go:83] releasing machines lock for "multinode-814696-m02", held for 10.410286266s
	I0809 18:58:58.232997  907909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-814696-m02
	I0809 18:58:58.252399  907909 out.go:177] * Found network options:
	I0809 18:58:58.254000  907909 out.go:177]   - NO_PROXY=192.168.58.2
	W0809 18:58:58.255393  907909 proxy.go:119] fail to check proxy env: Error ip not in block
	W0809 18:58:58.255427  907909 proxy.go:119] fail to check proxy env: Error ip not in block
	I0809 18:58:58.255491  907909 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0809 18:58:58.255539  907909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-814696-m02
	I0809 18:58:58.255582  907909 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0809 18:58:58.255634  907909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-814696-m02
	I0809 18:58:58.272651  907909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33487 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/multinode-814696-m02/id_rsa Username:docker}
	I0809 18:58:58.272820  907909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33487 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/multinode-814696-m02/id_rsa Username:docker}
	I0809 18:58:58.452447  907909 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0809 18:58:58.500697  907909 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0809 18:58:58.504785  907909 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0809 18:58:58.504807  907909 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0809 18:58:58.504814  907909 command_runner.go:130] > Device: b0h/176d	Inode: 797218      Links: 1
	I0809 18:58:58.504820  907909 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0809 18:58:58.504826  907909 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0809 18:58:58.504830  907909 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0809 18:58:58.504835  907909 command_runner.go:130] > Change: 2023-08-09 18:39:26.869078805 +0000
	I0809 18:58:58.504840  907909 command_runner.go:130] >  Birth: 2023-08-09 18:39:26.869078805 +0000
	I0809 18:58:58.505012  907909 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0809 18:58:58.521984  907909 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0809 18:58:58.522076  907909 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0809 18:58:58.549552  907909 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0809 18:58:58.549602  907909 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
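Both find/mv passes implement one pattern: rename any matching CNI config to <name>.mk_disabled so only the CNI configuration minikube manages stays active. A rough local equivalent in Go; the glob patterns are taken from the log, the rest is assumed:

    package main

    import (
        "fmt"
        "log"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        var disabled []string
        for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
            matches, err := filepath.Glob(pattern)
            if err != nil {
                log.Fatal(err)
            }
            for _, m := range matches {
                if strings.HasSuffix(m, ".mk_disabled") {
                    continue // already disabled on a previous run
                }
                if err := os.Rename(m, m+".mk_disabled"); err != nil {
                    log.Fatal(err)
                }
                disabled = append(disabled, m)
            }
        }
        fmt.Println("disabled:", disabled)
    }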
	I0809 18:58:58.549611  907909 start.go:466] detecting cgroup driver to use...
	I0809 18:58:58.549643  907909 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0809 18:58:58.549691  907909 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0809 18:58:58.563940  907909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0809 18:58:58.573910  907909 docker.go:196] disabling cri-docker service (if available) ...
	I0809 18:58:58.573965  907909 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0809 18:58:58.586385  907909 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0809 18:58:58.599143  907909 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0809 18:58:58.682826  907909 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0809 18:58:58.697094  907909 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0809 18:58:58.769721  907909 docker.go:212] disabling docker service ...
	I0809 18:58:58.769800  907909 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0809 18:58:58.787703  907909 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0809 18:58:58.798479  907909 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0809 18:58:58.809258  907909 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0809 18:58:58.873572  907909 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0809 18:58:58.954463  907909 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0809 18:58:58.954529  907909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0809 18:58:58.965787  907909 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0809 18:58:58.979821  907909 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0809 18:58:58.980730  907909 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0809 18:58:58.980794  907909 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0809 18:58:58.991167  907909 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0809 18:58:58.991243  907909 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0809 18:58:59.000422  907909 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0809 18:58:59.009556  907909 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0809 18:58:59.018451  907909 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0809 18:58:59.026680  907909 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0809 18:58:59.033495  907909 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0809 18:58:59.034128  907909 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
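The sed calls above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force cgroup_manager to "cgroupfs", and re-insert conmon_cgroup = "pod" directly after it. The same edits expressed as Go regexp replacements over the file; this sketches the effect only, since minikube performs them with sed over SSH:

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            log.Fatal(err)
        }
        s := string(data)
        // sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
        s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.9"`)
        // sed 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
        s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).ReplaceAllString(s, `cgroup_manager = "cgroupfs"`)
        // sed '/conmon_cgroup = .*/d'
        s = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(s, "")
        // sed '/cgroup_manager = .*/a conmon_cgroup = "pod"'
        s = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).ReplaceAllString(s, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
        if err := os.WriteFile(path, []byte(s), 0o644); err != nil {
            log.Fatal(err)
        }
    }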
	I0809 18:58:59.041581  907909 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0809 18:58:59.113299  907909 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0809 18:58:59.210475  907909 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0809 18:58:59.210552  907909 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0809 18:58:59.214191  907909 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0809 18:58:59.214215  907909 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0809 18:58:59.214224  907909 command_runner.go:130] > Device: b9h/185d	Inode: 186         Links: 1
	I0809 18:58:59.214233  907909 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0809 18:58:59.214241  907909 command_runner.go:130] > Access: 2023-08-09 18:58:59.198676532 +0000
	I0809 18:58:59.214249  907909 command_runner.go:130] > Modify: 2023-08-09 18:58:59.198676532 +0000
	I0809 18:58:59.214263  907909 command_runner.go:130] > Change: 2023-08-09 18:58:59.198676532 +0000
	I0809 18:58:59.214268  907909 command_runner.go:130] >  Birth: -
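The 60s wait on /var/run/crio/crio.sock is a readiness gate: after systemctl restart crio, the socket path is polled until it exists, and the stat output above shows it appearing almost immediately. A self-contained version of such a wait loop (the poll interval is an arbitrary choice here):

    package main

    import (
        "fmt"
        "log"
        "os"
        "time"
    )

    // waitForSocket polls until path exists and is a unix socket,
    // or the timeout elapses.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            log.Fatal(err)
        }
        fmt.Println("crio.sock is ready")
    }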
	I0809 18:58:59.214292  907909 start.go:534] Will wait 60s for crictl version
	I0809 18:58:59.214338  907909 ssh_runner.go:195] Run: which crictl
	I0809 18:58:59.217202  907909 command_runner.go:130] > /usr/bin/crictl
	I0809 18:58:59.217260  907909 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0809 18:58:59.248192  907909 command_runner.go:130] > Version:  0.1.0
	I0809 18:58:59.248217  907909 command_runner.go:130] > RuntimeName:  cri-o
	I0809 18:58:59.248224  907909 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0809 18:58:59.248232  907909 command_runner.go:130] > RuntimeApiVersion:  v1
	I0809 18:58:59.250231  907909 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0809 18:58:59.250305  907909 ssh_runner.go:195] Run: crio --version
	I0809 18:58:59.284542  907909 command_runner.go:130] > crio version 1.24.6
	I0809 18:58:59.284568  907909 command_runner.go:130] > Version:          1.24.6
	I0809 18:58:59.284577  907909 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0809 18:58:59.284584  907909 command_runner.go:130] > GitTreeState:     clean
	I0809 18:58:59.284592  907909 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0809 18:58:59.284602  907909 command_runner.go:130] > GoVersion:        go1.18.2
	I0809 18:58:59.284610  907909 command_runner.go:130] > Compiler:         gc
	I0809 18:58:59.284618  907909 command_runner.go:130] > Platform:         linux/amd64
	I0809 18:58:59.284624  907909 command_runner.go:130] > Linkmode:         dynamic
	I0809 18:58:59.284634  907909 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0809 18:58:59.284641  907909 command_runner.go:130] > SeccompEnabled:   true
	I0809 18:58:59.284645  907909 command_runner.go:130] > AppArmorEnabled:  false
	I0809 18:58:59.284711  907909 ssh_runner.go:195] Run: crio --version
	I0809 18:58:59.315852  907909 command_runner.go:130] > crio version 1.24.6
	I0809 18:58:59.315878  907909 command_runner.go:130] > Version:          1.24.6
	I0809 18:58:59.315888  907909 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0809 18:58:59.315895  907909 command_runner.go:130] > GitTreeState:     clean
	I0809 18:58:59.315904  907909 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0809 18:58:59.315910  907909 command_runner.go:130] > GoVersion:        go1.18.2
	I0809 18:58:59.315917  907909 command_runner.go:130] > Compiler:         gc
	I0809 18:58:59.315924  907909 command_runner.go:130] > Platform:         linux/amd64
	I0809 18:58:59.315933  907909 command_runner.go:130] > Linkmode:         dynamic
	I0809 18:58:59.315950  907909 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0809 18:58:59.315960  907909 command_runner.go:130] > SeccompEnabled:   true
	I0809 18:58:59.315970  907909 command_runner.go:130] > AppArmorEnabled:  false
	I0809 18:58:59.319271  907909 out.go:177] * Preparing Kubernetes v1.27.4 on CRI-O 1.24.6 ...
	I0809 18:58:59.320660  907909 out.go:177]   - env NO_PROXY=192.168.58.2
	I0809 18:58:59.322053  907909 cli_runner.go:164] Run: docker network inspect multinode-814696 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0809 18:58:59.338504  907909 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0809 18:58:59.341985  907909 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
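The /etc/hosts one-liner strips any stale host.minikube.internal entry, appends the gateway IP, and copies the temp file back with sudo. The same logic in Go, operating on the file directly; a sketch only, since minikube runs the shell version over SSH:

    package main

    import (
        "log"
        "os"
        "strings"
    )

    func main() {
        const hostsPath = "/etc/hosts"
        const entry = "192.168.58.1\thost.minikube.internal"

        data, err := os.ReadFile(hostsPath)
        if err != nil {
            log.Fatal(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // grep -v $'\thost.minikube.internal$'
            if strings.HasSuffix(line, "\thost.minikube.internal") {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, entry)
        if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            log.Fatal(err)
        }
    }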
	I0809 18:58:59.352125  907909 certs.go:56] Setting up /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/multinode-814696 for IP: 192.168.58.3
	I0809 18:58:59.352151  907909 certs.go:190] acquiring lock for shared ca certs: {Name:mk19b72d6df3cc07014c8108931f9946a7850469 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 18:58:59.352303  907909 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17011-816603/.minikube/ca.key
	I0809 18:58:59.352354  907909 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17011-816603/.minikube/proxy-client-ca.key
	I0809 18:58:59.352371  907909 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0809 18:58:59.352387  907909 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0809 18:58:59.352399  907909 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0809 18:58:59.352412  907909 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0809 18:58:59.352477  907909 certs.go:437] found cert: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/823434.pem (1338 bytes)
	W0809 18:58:59.352515  907909 certs.go:433] ignoring /home/jenkins/minikube-integration/17011-816603/.minikube/certs/823434_empty.pem, impossibly tiny 0 bytes
	I0809 18:58:59.352531  907909 certs.go:437] found cert: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca-key.pem (1675 bytes)
	I0809 18:58:59.352564  907909 certs.go:437] found cert: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca.pem (1082 bytes)
	I0809 18:58:59.352600  907909 certs.go:437] found cert: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/cert.pem (1123 bytes)
	I0809 18:58:59.352636  907909 certs.go:437] found cert: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/key.pem (1679 bytes)
	I0809 18:58:59.352695  907909 certs.go:437] found cert: /home/jenkins/minikube-integration/17011-816603/.minikube/files/etc/ssl/certs/8234342.pem (1708 bytes)
	I0809 18:58:59.352736  907909 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/823434.pem -> /usr/share/ca-certificates/823434.pem
	I0809 18:58:59.352755  907909 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/files/etc/ssl/certs/8234342.pem -> /usr/share/ca-certificates/8234342.pem
	I0809 18:58:59.352772  907909 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17011-816603/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0809 18:58:59.353118  907909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0809 18:58:59.374860  907909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0809 18:58:59.396303  907909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0809 18:58:59.417753  907909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0809 18:58:59.439403  907909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/certs/823434.pem --> /usr/share/ca-certificates/823434.pem (1338 bytes)
	I0809 18:58:59.460804  907909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/files/etc/ssl/certs/8234342.pem --> /usr/share/ca-certificates/8234342.pem (1708 bytes)
	I0809 18:58:59.481526  907909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
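Each NewFileAsset above pairs a file on the Jenkins host with a destination inside the node, and the scp lines are that list being drained one entry at a time. A compact way to model it; copyAsset is a local placeholder, since the real transfer goes through minikube's SSH runner:

    package main

    import (
        "io"
        "log"
        "os"
    )

    type fileAsset struct{ src, dst string }

    // copyAsset is a local stand-in for ssh_runner's scp.
    func copyAsset(a fileAsset) error {
        in, err := os.Open(a.src)
        if err != nil {
            return err
        }
        defer in.Close()
        out, err := os.Create(a.dst)
        if err != nil {
            return err
        }
        defer out.Close()
        _, err = io.Copy(out, in)
        return err
    }

    func main() {
        base := "/home/jenkins/minikube-integration/17011-816603/.minikube"
        assets := []fileAsset{
            {base + "/ca.crt", "/var/lib/minikube/certs/ca.crt"},
            {base + "/ca.key", "/var/lib/minikube/certs/ca.key"},
            {base + "/certs/823434.pem", "/usr/share/ca-certificates/823434.pem"},
            {base + "/ca.crt", "/usr/share/ca-certificates/minikubeCA.pem"},
        }
        for _, a := range assets {
            if err := copyAsset(a); err != nil {
                log.Fatal(err)
            }
        }
    }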
	I0809 18:58:59.502765  907909 ssh_runner.go:195] Run: openssl version
	I0809 18:58:59.507754  907909 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0809 18:58:59.507836  907909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/823434.pem && ln -fs /usr/share/ca-certificates/823434.pem /etc/ssl/certs/823434.pem"
	I0809 18:58:59.516117  907909 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/823434.pem
	I0809 18:58:59.519134  907909 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  9 18:45 /usr/share/ca-certificates/823434.pem
	I0809 18:58:59.519190  907909 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug  9 18:45 /usr/share/ca-certificates/823434.pem
	I0809 18:58:59.519227  907909 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/823434.pem
	I0809 18:58:59.525201  907909 command_runner.go:130] > 51391683
	I0809 18:58:59.525394  907909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/823434.pem /etc/ssl/certs/51391683.0"
	I0809 18:58:59.533396  907909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8234342.pem && ln -fs /usr/share/ca-certificates/8234342.pem /etc/ssl/certs/8234342.pem"
	I0809 18:58:59.541645  907909 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8234342.pem
	I0809 18:58:59.544621  907909 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  9 18:45 /usr/share/ca-certificates/8234342.pem
	I0809 18:58:59.544670  907909 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug  9 18:45 /usr/share/ca-certificates/8234342.pem
	I0809 18:58:59.544716  907909 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8234342.pem
	I0809 18:58:59.550524  907909 command_runner.go:130] > 3ec20f2e
	I0809 18:58:59.550691  907909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8234342.pem /etc/ssl/certs/3ec20f2e.0"
	I0809 18:58:59.558695  907909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0809 18:58:59.566854  907909 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0809 18:58:59.569835  907909 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  9 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0809 18:58:59.569886  907909 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug  9 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0809 18:58:59.569927  907909 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0809 18:58:59.576099  907909 command_runner.go:130] > b5213941
	I0809 18:58:59.576163  907909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
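For every CA the pattern repeats: place the PEM under /usr/share/ca-certificates, ask openssl for its subject hash, then symlink /etc/ssl/certs/<hash>.0 to it so OpenSSL's hash-based lookup finds it. A Go rendering of those steps for a single certificate, shelling out to openssl exactly as the log does:

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pem := "/usr/share/ca-certificates/minikubeCA.pem"

        // openssl x509 -hash -noout -in <pem>  ->  e.g. "b5213941"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            log.Fatal(err)
        }
        hash := strings.TrimSpace(string(out))

        // test -L /etc/ssl/certs/<hash>.0 || ln -fs <pem> /etc/ssl/certs/<hash>.0
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        if _, err := os.Lstat(link); os.IsNotExist(err) {
            if err := os.Symlink(pem, link); err != nil {
                log.Fatal(err)
            }
        }
        fmt.Println(link, "->", pem)
    }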
	I0809 18:58:59.584345  907909 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0809 18:58:59.587182  907909 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0809 18:58:59.587255  907909 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0809 18:58:59.587332  907909 ssh_runner.go:195] Run: crio config
	I0809 18:58:59.623297  907909 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0809 18:58:59.623326  907909 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0809 18:58:59.623338  907909 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0809 18:58:59.623347  907909 command_runner.go:130] > #
	I0809 18:58:59.623359  907909 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0809 18:58:59.623369  907909 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0809 18:58:59.623379  907909 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0809 18:58:59.623389  907909 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0809 18:58:59.623396  907909 command_runner.go:130] > # reload'.
	I0809 18:58:59.623412  907909 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0809 18:58:59.623425  907909 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0809 18:58:59.623439  907909 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0809 18:58:59.623452  907909 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0809 18:58:59.623461  907909 command_runner.go:130] > [crio]
	I0809 18:58:59.623471  907909 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0809 18:58:59.623479  907909 command_runner.go:130] > # containers images, in this directory.
	I0809 18:58:59.623495  907909 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0809 18:58:59.623506  907909 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0809 18:58:59.623519  907909 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0809 18:58:59.623532  907909 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0809 18:58:59.623546  907909 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0809 18:58:59.623556  907909 command_runner.go:130] > # storage_driver = "vfs"
	I0809 18:58:59.623568  907909 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0809 18:58:59.623581  907909 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0809 18:58:59.623593  907909 command_runner.go:130] > # storage_option = [
	I0809 18:58:59.623599  907909 command_runner.go:130] > # ]
	I0809 18:58:59.623610  907909 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0809 18:58:59.623621  907909 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0809 18:58:59.623632  907909 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0809 18:58:59.623658  907909 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0809 18:58:59.623672  907909 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0809 18:58:59.623680  907909 command_runner.go:130] > # always happen on a node reboot
	I0809 18:58:59.623692  907909 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0809 18:58:59.623703  907909 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0809 18:58:59.623717  907909 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0809 18:58:59.623733  907909 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0809 18:58:59.623750  907909 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0809 18:58:59.623764  907909 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0809 18:58:59.623778  907909 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0809 18:58:59.623788  907909 command_runner.go:130] > # internal_wipe = true
	I0809 18:58:59.623801  907909 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0809 18:58:59.623815  907909 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0809 18:58:59.623827  907909 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0809 18:58:59.623839  907909 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0809 18:58:59.623852  907909 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0809 18:58:59.623858  907909 command_runner.go:130] > [crio.api]
	I0809 18:58:59.623873  907909 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0809 18:58:59.623880  907909 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0809 18:58:59.623891  907909 command_runner.go:130] > # IP address on which the stream server will listen.
	I0809 18:58:59.623899  907909 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0809 18:58:59.623913  907909 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0809 18:58:59.623926  907909 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0809 18:58:59.623933  907909 command_runner.go:130] > # stream_port = "0"
	I0809 18:58:59.623942  907909 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0809 18:58:59.623949  907909 command_runner.go:130] > # stream_enable_tls = false
	I0809 18:58:59.623961  907909 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0809 18:58:59.623972  907909 command_runner.go:130] > # stream_idle_timeout = ""
	I0809 18:58:59.623983  907909 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0809 18:58:59.623997  907909 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0809 18:58:59.624005  907909 command_runner.go:130] > # minutes.
	I0809 18:58:59.624047  907909 command_runner.go:130] > # stream_tls_cert = ""
	I0809 18:58:59.624062  907909 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0809 18:58:59.624073  907909 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0809 18:58:59.624083  907909 command_runner.go:130] > # stream_tls_key = ""
	I0809 18:58:59.624093  907909 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0809 18:58:59.624107  907909 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0809 18:58:59.624119  907909 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0809 18:58:59.624128  907909 command_runner.go:130] > # stream_tls_ca = ""
	I0809 18:58:59.624137  907909 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0809 18:58:59.624148  907909 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0809 18:58:59.624165  907909 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0809 18:58:59.624175  907909 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0809 18:58:59.624195  907909 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0809 18:58:59.624208  907909 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0809 18:58:59.624217  907909 command_runner.go:130] > [crio.runtime]
	I0809 18:58:59.624229  907909 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0809 18:58:59.624242  907909 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0809 18:58:59.624252  907909 command_runner.go:130] > # "nofile=1024:2048"
	I0809 18:58:59.624263  907909 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0809 18:58:59.624273  907909 command_runner.go:130] > # default_ulimits = [
	I0809 18:58:59.624279  907909 command_runner.go:130] > # ]
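The ulimit entries use the compact form "<ulimit name>=<soft limit>:<hard limit>". A small parser for that shape, in case it isn't obvious; the type and function names here are ours, not CRI-O's:

    package main

    import (
        "fmt"
        "log"
        "strconv"
        "strings"
    )

    type ulimit struct {
        name       string
        soft, hard int64
    }

    // parseUlimit parses "<ulimit name>=<soft limit>:<hard limit>",
    // e.g. "nofile=1024:2048".
    func parseUlimit(s string) (ulimit, error) {
        name, limits, ok := strings.Cut(s, "=")
        if !ok {
            return ulimit{}, fmt.Errorf("missing '=' in %q", s)
        }
        softStr, hardStr, ok := strings.Cut(limits, ":")
        if !ok {
            return ulimit{}, fmt.Errorf("missing ':' in %q", s)
        }
        soft, err := strconv.ParseInt(softStr, 10, 64)
        if err != nil {
            return ulimit{}, err
        }
        hard, err := strconv.ParseInt(hardStr, 10, 64)
        if err != nil {
            return ulimit{}, err
        }
        return ulimit{name, soft, hard}, nil
    }

    func main() {
        u, err := parseUlimit("nofile=1024:2048")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%+v\n", u)
    }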
	I0809 18:58:59.624293  907909 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0809 18:58:59.624302  907909 command_runner.go:130] > # no_pivot = false
	I0809 18:58:59.624312  907909 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0809 18:58:59.624326  907909 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0809 18:58:59.624337  907909 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0809 18:58:59.624351  907909 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0809 18:58:59.624362  907909 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0809 18:58:59.624375  907909 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0809 18:58:59.624384  907909 command_runner.go:130] > # conmon = ""
	I0809 18:58:59.624392  907909 command_runner.go:130] > # Cgroup setting for conmon
	I0809 18:58:59.624407  907909 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0809 18:58:59.624417  907909 command_runner.go:130] > conmon_cgroup = "pod"
	I0809 18:58:59.624428  907909 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0809 18:58:59.624441  907909 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0809 18:58:59.624455  907909 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0809 18:58:59.624466  907909 command_runner.go:130] > # conmon_env = [
	I0809 18:58:59.624472  907909 command_runner.go:130] > # ]
	I0809 18:58:59.624484  907909 command_runner.go:130] > # Additional environment variables to set for all the
	I0809 18:58:59.624496  907909 command_runner.go:130] > # containers. These are overridden if set in the
	I0809 18:58:59.624509  907909 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0809 18:58:59.624519  907909 command_runner.go:130] > # default_env = [
	I0809 18:58:59.624525  907909 command_runner.go:130] > # ]
	I0809 18:58:59.624535  907909 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0809 18:58:59.624545  907909 command_runner.go:130] > # selinux = false
	I0809 18:58:59.624555  907909 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0809 18:58:59.624568  907909 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0809 18:58:59.624580  907909 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0809 18:58:59.624600  907909 command_runner.go:130] > # seccomp_profile = ""
	I0809 18:58:59.624608  907909 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0809 18:58:59.624616  907909 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0809 18:58:59.624638  907909 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0809 18:58:59.624645  907909 command_runner.go:130] > # which might increase security.
	I0809 18:58:59.624652  907909 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0809 18:58:59.624668  907909 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0809 18:58:59.624679  907909 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0809 18:58:59.624691  907909 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0809 18:58:59.624705  907909 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0809 18:58:59.624717  907909 command_runner.go:130] > # This option supports live configuration reload.
	I0809 18:58:59.624726  907909 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0809 18:58:59.624734  907909 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0809 18:58:59.624741  907909 command_runner.go:130] > # the cgroup blockio controller.
	I0809 18:58:59.624750  907909 command_runner.go:130] > # blockio_config_file = ""
	I0809 18:58:59.624759  907909 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0809 18:58:59.624765  907909 command_runner.go:130] > # irqbalance daemon.
	I0809 18:58:59.624771  907909 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0809 18:58:59.624779  907909 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0809 18:58:59.624787  907909 command_runner.go:130] > # This option supports live configuration reload.
	I0809 18:58:59.624791  907909 command_runner.go:130] > # rdt_config_file = ""
	I0809 18:58:59.624800  907909 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0809 18:58:59.624806  907909 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0809 18:58:59.624812  907909 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0809 18:58:59.624819  907909 command_runner.go:130] > # separate_pull_cgroup = ""
	I0809 18:58:59.624825  907909 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0809 18:58:59.624833  907909 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0809 18:58:59.624840  907909 command_runner.go:130] > # will be added.
	I0809 18:58:59.624844  907909 command_runner.go:130] > # default_capabilities = [
	I0809 18:58:59.624850  907909 command_runner.go:130] > # 	"CHOWN",
	I0809 18:58:59.624854  907909 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0809 18:58:59.624860  907909 command_runner.go:130] > # 	"FSETID",
	I0809 18:58:59.624864  907909 command_runner.go:130] > # 	"FOWNER",
	I0809 18:58:59.624870  907909 command_runner.go:130] > # 	"SETGID",
	I0809 18:58:59.624874  907909 command_runner.go:130] > # 	"SETUID",
	I0809 18:58:59.624906  907909 command_runner.go:130] > # 	"SETPCAP",
	I0809 18:58:59.624913  907909 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0809 18:58:59.624917  907909 command_runner.go:130] > # 	"KILL",
	I0809 18:58:59.624920  907909 command_runner.go:130] > # ]
	I0809 18:58:59.624928  907909 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0809 18:58:59.624937  907909 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0809 18:58:59.624942  907909 command_runner.go:130] > # add_inheritable_capabilities = true
	I0809 18:58:59.624954  907909 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0809 18:58:59.624966  907909 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0809 18:58:59.624974  907909 command_runner.go:130] > # default_sysctls = [
	I0809 18:58:59.624981  907909 command_runner.go:130] > # ]
	I0809 18:58:59.624986  907909 command_runner.go:130] > # List of devices on the host that a
	I0809 18:58:59.624994  907909 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0809 18:58:59.625000  907909 command_runner.go:130] > # allowed_devices = [
	I0809 18:58:59.625005  907909 command_runner.go:130] > # 	"/dev/fuse",
	I0809 18:58:59.625010  907909 command_runner.go:130] > # ]
	I0809 18:58:59.625015  907909 command_runner.go:130] > # List of additional devices. specified as
	I0809 18:58:59.625035  907909 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0809 18:58:59.625047  907909 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0809 18:58:59.625061  907909 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0809 18:58:59.625071  907909 command_runner.go:130] > # additional_devices = [
	I0809 18:58:59.625076  907909 command_runner.go:130] > # ]
	I0809 18:58:59.625083  907909 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0809 18:58:59.625088  907909 command_runner.go:130] > # cdi_spec_dirs = [
	I0809 18:58:59.625094  907909 command_runner.go:130] > # 	"/etc/cdi",
	I0809 18:58:59.625098  907909 command_runner.go:130] > # 	"/var/run/cdi",
	I0809 18:58:59.625103  907909 command_runner.go:130] > # ]
	I0809 18:58:59.625110  907909 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0809 18:58:59.625118  907909 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0809 18:58:59.625124  907909 command_runner.go:130] > # Defaults to false.
	I0809 18:58:59.625134  907909 command_runner.go:130] > # device_ownership_from_security_context = false
	I0809 18:58:59.625148  907909 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0809 18:58:59.625162  907909 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0809 18:58:59.625171  907909 command_runner.go:130] > # hooks_dir = [
	I0809 18:58:59.625176  907909 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0809 18:58:59.625182  907909 command_runner.go:130] > # ]
	I0809 18:58:59.625187  907909 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0809 18:58:59.625195  907909 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0809 18:58:59.625201  907909 command_runner.go:130] > # its default mounts from the following two files:
	I0809 18:58:59.625206  907909 command_runner.go:130] > #
	I0809 18:58:59.625215  907909 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0809 18:58:59.625229  907909 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0809 18:58:59.625243  907909 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0809 18:58:59.625251  907909 command_runner.go:130] > #
	I0809 18:58:59.625265  907909 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0809 18:58:59.625278  907909 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0809 18:58:59.625286  907909 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0809 18:58:59.625293  907909 command_runner.go:130] > #      only add mounts it finds in this file.
	I0809 18:58:59.625297  907909 command_runner.go:130] > #
	I0809 18:58:59.625307  907909 command_runner.go:130] > # default_mounts_file = ""
	I0809 18:58:59.625320  907909 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0809 18:58:59.625334  907909 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0809 18:58:59.625344  907909 command_runner.go:130] > # pids_limit = 0
	I0809 18:58:59.625358  907909 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0809 18:58:59.625372  907909 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0809 18:58:59.625381  907909 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0809 18:58:59.625399  907909 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0809 18:58:59.625410  907909 command_runner.go:130] > # log_size_max = -1
	I0809 18:58:59.625425  907909 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0809 18:58:59.625436  907909 command_runner.go:130] > # log_to_journald = false
	I0809 18:58:59.625449  907909 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0809 18:58:59.625460  907909 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0809 18:58:59.625469  907909 command_runner.go:130] > # Path to directory for container attach sockets.
	I0809 18:58:59.625476  907909 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0809 18:58:59.625489  907909 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0809 18:58:59.625499  907909 command_runner.go:130] > # bind_mount_prefix = ""
	I0809 18:58:59.625512  907909 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0809 18:58:59.625521  907909 command_runner.go:130] > # read_only = false
	I0809 18:58:59.625535  907909 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0809 18:58:59.625547  907909 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0809 18:58:59.625554  907909 command_runner.go:130] > # live configuration reload.
	I0809 18:58:59.625561  907909 command_runner.go:130] > # log_level = "info"
	I0809 18:58:59.625574  907909 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0809 18:58:59.625586  907909 command_runner.go:130] > # This option supports live configuration reload.
	I0809 18:58:59.625595  907909 command_runner.go:130] > # log_filter = ""
	I0809 18:58:59.625608  907909 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0809 18:58:59.625623  907909 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0809 18:58:59.625632  907909 command_runner.go:130] > # separated by comma.
	I0809 18:58:59.625638  907909 command_runner.go:130] > # uid_mappings = ""
	I0809 18:58:59.625647  907909 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0809 18:58:59.625660  907909 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0809 18:58:59.625671  907909 command_runner.go:130] > # separated by comma.
	I0809 18:58:59.625680  907909 command_runner.go:130] > # gid_mappings = ""
	I0809 18:58:59.625693  907909 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0809 18:58:59.625726  907909 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0809 18:58:59.625740  907909 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0809 18:58:59.625755  907909 command_runner.go:130] > # minimum_mappable_uid = -1
	I0809 18:58:59.625770  907909 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0809 18:58:59.625784  907909 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0809 18:58:59.625797  907909 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0809 18:58:59.625807  907909 command_runner.go:130] > # minimum_mappable_gid = -1
	I0809 18:58:59.625815  907909 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0809 18:58:59.625828  907909 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0809 18:58:59.625841  907909 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0809 18:58:59.625851  907909 command_runner.go:130] > # ctr_stop_timeout = 30
	I0809 18:58:59.625861  907909 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0809 18:58:59.625874  907909 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0809 18:58:59.625886  907909 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0809 18:58:59.625896  907909 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0809 18:58:59.625902  907909 command_runner.go:130] > # drop_infra_ctr = true
	I0809 18:58:59.625913  907909 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0809 18:58:59.625926  907909 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0809 18:58:59.625941  907909 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0809 18:58:59.625951  907909 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0809 18:58:59.625962  907909 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0809 18:58:59.625974  907909 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0809 18:58:59.625982  907909 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0809 18:58:59.625991  907909 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0809 18:58:59.626001  907909 command_runner.go:130] > # pinns_path = ""
	I0809 18:58:59.626015  907909 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0809 18:58:59.626029  907909 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0809 18:58:59.626042  907909 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0809 18:58:59.626054  907909 command_runner.go:130] > # default_runtime = "runc"
	I0809 18:58:59.626063  907909 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0809 18:58:59.626073  907909 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0809 18:58:59.626092  907909 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0809 18:58:59.626104  907909 command_runner.go:130] > # creation as a file is not desired either.
	I0809 18:58:59.626121  907909 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0809 18:58:59.626133  907909 command_runner.go:130] > # the hostname is being managed dynamically.
	I0809 18:58:59.626144  907909 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0809 18:58:59.626152  907909 command_runner.go:130] > # ]
	I0809 18:58:59.626158  907909 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0809 18:58:59.626171  907909 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0809 18:58:59.626186  907909 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0809 18:58:59.626200  907909 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0809 18:58:59.626209  907909 command_runner.go:130] > #
	I0809 18:58:59.626220  907909 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0809 18:58:59.626231  907909 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0809 18:58:59.626240  907909 command_runner.go:130] > #  runtime_type = "oci"
	I0809 18:58:59.626248  907909 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0809 18:58:59.626259  907909 command_runner.go:130] > #  privileged_without_host_devices = false
	I0809 18:58:59.626270  907909 command_runner.go:130] > #  allowed_annotations = []
	I0809 18:58:59.626279  907909 command_runner.go:130] > # Where:
	I0809 18:58:59.626291  907909 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0809 18:58:59.626304  907909 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0809 18:58:59.626318  907909 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0809 18:58:59.626329  907909 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0809 18:58:59.626334  907909 command_runner.go:130] > #   in $PATH.
	I0809 18:58:59.626343  907909 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0809 18:58:59.626355  907909 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0809 18:58:59.626369  907909 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0809 18:58:59.626377  907909 command_runner.go:130] > #   state.
	I0809 18:58:59.626392  907909 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0809 18:58:59.626405  907909 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0809 18:58:59.626416  907909 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0809 18:58:59.626424  907909 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0809 18:58:59.626435  907909 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0809 18:58:59.626486  907909 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0809 18:58:59.626502  907909 command_runner.go:130] > #   The currently recognized values are:
	I0809 18:58:59.626515  907909 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0809 18:58:59.626531  907909 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0809 18:58:59.626544  907909 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0809 18:58:59.626583  907909 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0809 18:58:59.626594  907909 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0809 18:58:59.626619  907909 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0809 18:58:59.626633  907909 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0809 18:58:59.626643  907909 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0809 18:58:59.626654  907909 command_runner.go:130] > #   should be moved to the container's cgroup
	I0809 18:58:59.626666  907909 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0809 18:58:59.626677  907909 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0809 18:58:59.626687  907909 command_runner.go:130] > runtime_type = "oci"
	I0809 18:58:59.626697  907909 command_runner.go:130] > runtime_root = "/run/runc"
	I0809 18:58:59.626706  907909 command_runner.go:130] > runtime_config_path = ""
	I0809 18:58:59.626712  907909 command_runner.go:130] > monitor_path = ""
	I0809 18:58:59.626722  907909 command_runner.go:130] > monitor_cgroup = ""
	I0809 18:58:59.626731  907909 command_runner.go:130] > monitor_exec_cgroup = ""
	I0809 18:58:59.626783  907909 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0809 18:58:59.626790  907909 command_runner.go:130] > # running containers
	I0809 18:58:59.626795  907909 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0809 18:58:59.626803  907909 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0809 18:58:59.626811  907909 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0809 18:58:59.626819  907909 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I0809 18:58:59.626826  907909 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0809 18:58:59.626831  907909 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0809 18:58:59.626837  907909 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0809 18:58:59.626842  907909 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0809 18:58:59.626849  907909 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0809 18:58:59.626853  907909 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0809 18:58:59.626862  907909 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0809 18:58:59.626868  907909 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0809 18:58:59.626876  907909 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0809 18:58:59.626886  907909 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix, and a set of resources it supports mutating.
	I0809 18:58:59.626895  907909 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0809 18:58:59.626902  907909 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0809 18:58:59.626912  907909 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0809 18:58:59.626921  907909 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0809 18:58:59.626927  907909 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0809 18:58:59.626936  907909 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0809 18:58:59.626941  907909 command_runner.go:130] > # Example:
	I0809 18:58:59.626946  907909 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0809 18:58:59.626953  907909 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0809 18:58:59.626958  907909 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0809 18:58:59.626966  907909 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0809 18:58:59.626972  907909 command_runner.go:130] > # cpuset = "0-1"
	I0809 18:58:59.626976  907909 command_runner.go:130] > # cpushares = 0
	I0809 18:58:59.626982  907909 command_runner.go:130] > # Where:
	I0809 18:58:59.626987  907909 command_runner.go:130] > # The workload name is workload-type.
	I0809 18:58:59.626996  907909 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0809 18:58:59.627003  907909 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0809 18:58:59.627011  907909 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0809 18:58:59.627018  907909 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0809 18:58:59.627026  907909 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0809 18:58:59.627033  907909 command_runner.go:130] > # 
	I0809 18:58:59.627040  907909 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0809 18:58:59.627045  907909 command_runner.go:130] > #
	I0809 18:58:59.627050  907909 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0809 18:58:59.627058  907909 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0809 18:58:59.627067  907909 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0809 18:58:59.627075  907909 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0809 18:58:59.627081  907909 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0809 18:58:59.627087  907909 command_runner.go:130] > [crio.image]
	I0809 18:58:59.627092  907909 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0809 18:58:59.627099  907909 command_runner.go:130] > # default_transport = "docker://"
	I0809 18:58:59.627105  907909 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0809 18:58:59.627112  907909 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0809 18:58:59.627119  907909 command_runner.go:130] > # global_auth_file = ""
	I0809 18:58:59.627124  907909 command_runner.go:130] > # The image used to instantiate infra containers.
	I0809 18:58:59.627131  907909 command_runner.go:130] > # This option supports live configuration reload.
	I0809 18:58:59.627136  907909 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0809 18:58:59.627144  907909 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0809 18:58:59.627152  907909 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0809 18:58:59.627159  907909 command_runner.go:130] > # This option supports live configuration reload.
	I0809 18:58:59.627164  907909 command_runner.go:130] > # pause_image_auth_file = ""
	I0809 18:58:59.627172  907909 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0809 18:58:59.627184  907909 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0809 18:58:59.627192  907909 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0809 18:58:59.627198  907909 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0809 18:58:59.627204  907909 command_runner.go:130] > # pause_command = "/pause"
	I0809 18:58:59.627210  907909 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0809 18:58:59.627219  907909 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0809 18:58:59.627227  907909 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0809 18:58:59.627235  907909 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0809 18:58:59.627243  907909 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0809 18:58:59.627249  907909 command_runner.go:130] > # signature_policy = ""
	I0809 18:58:59.627255  907909 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0809 18:58:59.627263  907909 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0809 18:58:59.627269  907909 command_runner.go:130] > # changing them here.
	I0809 18:58:59.627273  907909 command_runner.go:130] > # insecure_registries = [
	I0809 18:58:59.627279  907909 command_runner.go:130] > # ]
	I0809 18:58:59.627285  907909 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0809 18:58:59.627292  907909 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I0809 18:58:59.627296  907909 command_runner.go:130] > # image_volumes = "mkdir"
	I0809 18:58:59.627303  907909 command_runner.go:130] > # Temporary directory to use for storing big files
	I0809 18:58:59.627311  907909 command_runner.go:130] > # big_files_temporary_dir = ""
	I0809 18:58:59.627316  907909 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0809 18:58:59.627322  907909 command_runner.go:130] > # CNI plugins.
	I0809 18:58:59.627326  907909 command_runner.go:130] > [crio.network]
	I0809 18:58:59.627334  907909 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0809 18:58:59.627341  907909 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0809 18:58:59.627345  907909 command_runner.go:130] > # cni_default_network = ""
	I0809 18:58:59.627353  907909 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0809 18:58:59.627357  907909 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0809 18:58:59.627364  907909 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0809 18:58:59.627369  907909 command_runner.go:130] > # plugin_dirs = [
	I0809 18:58:59.627376  907909 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0809 18:58:59.627379  907909 command_runner.go:130] > # ]
	I0809 18:58:59.627390  907909 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I0809 18:58:59.627397  907909 command_runner.go:130] > [crio.metrics]
	I0809 18:58:59.627402  907909 command_runner.go:130] > # Globally enable or disable metrics support.
	I0809 18:58:59.627408  907909 command_runner.go:130] > # enable_metrics = false
	I0809 18:58:59.627413  907909 command_runner.go:130] > # Specify enabled metrics collectors.
	I0809 18:58:59.627419  907909 command_runner.go:130] > # Per default all metrics are enabled.
	I0809 18:58:59.627425  907909 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0809 18:58:59.627433  907909 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0809 18:58:59.627441  907909 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0809 18:58:59.627447  907909 command_runner.go:130] > # metrics_collectors = [
	I0809 18:58:59.627451  907909 command_runner.go:130] > # 	"operations",
	I0809 18:58:59.627456  907909 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0809 18:58:59.627462  907909 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0809 18:58:59.627467  907909 command_runner.go:130] > # 	"operations_errors",
	I0809 18:58:59.627473  907909 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0809 18:58:59.627477  907909 command_runner.go:130] > # 	"image_pulls_by_name",
	I0809 18:58:59.627484  907909 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0809 18:58:59.627488  907909 command_runner.go:130] > # 	"image_pulls_failures",
	I0809 18:58:59.627495  907909 command_runner.go:130] > # 	"image_pulls_successes",
	I0809 18:58:59.627499  907909 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0809 18:58:59.627505  907909 command_runner.go:130] > # 	"image_layer_reuse",
	I0809 18:58:59.627509  907909 command_runner.go:130] > # 	"containers_oom_total",
	I0809 18:58:59.627515  907909 command_runner.go:130] > # 	"containers_oom",
	I0809 18:58:59.627518  907909 command_runner.go:130] > # 	"processes_defunct",
	I0809 18:58:59.627525  907909 command_runner.go:130] > # 	"operations_total",
	I0809 18:58:59.627529  907909 command_runner.go:130] > # 	"operations_latency_seconds",
	I0809 18:58:59.627536  907909 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0809 18:58:59.627540  907909 command_runner.go:130] > # 	"operations_errors_total",
	I0809 18:58:59.627547  907909 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0809 18:58:59.627552  907909 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0809 18:58:59.627559  907909 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0809 18:58:59.627563  907909 command_runner.go:130] > # 	"image_pulls_success_total",
	I0809 18:58:59.627569  907909 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0809 18:58:59.627574  907909 command_runner.go:130] > # 	"containers_oom_count_total",
	I0809 18:58:59.627580  907909 command_runner.go:130] > # ]
	I0809 18:58:59.627585  907909 command_runner.go:130] > # The port on which the metrics server will listen.
	I0809 18:58:59.627591  907909 command_runner.go:130] > # metrics_port = 9090
	I0809 18:58:59.627596  907909 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0809 18:58:59.627602  907909 command_runner.go:130] > # metrics_socket = ""
	I0809 18:58:59.627607  907909 command_runner.go:130] > # The certificate for the secure metrics server.
	I0809 18:58:59.627615  907909 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0809 18:58:59.627623  907909 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0809 18:58:59.627628  907909 command_runner.go:130] > # certificate on any modification event.
	I0809 18:58:59.627652  907909 command_runner.go:130] > # metrics_cert = ""
	I0809 18:58:59.627664  907909 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0809 18:58:59.627675  907909 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0809 18:58:59.627680  907909 command_runner.go:130] > # metrics_key = ""
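
Metrics are disabled in this profile (enable_metrics is commented out above). If the option were flipped to true, the endpoint on the default metrics_port could be scraped directly. A minimal sketch in Go, assuming CRI-O is listening on localhost:9090 (hypothetical here):

	package main

	import (
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// Assumes enable_metrics = true and the default metrics_port = 9090
		// (both are commented out in the dump above, so this is hypothetical).
		resp, err := http.Get("http://127.0.0.1:9090/metrics")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			panic(err)
		}
		// Prometheus text exposition; collector names match the list above,
		// e.g. crio_operations or container_runtime_crio_operations.
		fmt.Printf("%s", body)
	}
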
	I0809 18:58:59.627688  907909 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0809 18:58:59.627692  907909 command_runner.go:130] > [crio.tracing]
	I0809 18:58:59.627700  907909 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0809 18:58:59.627704  907909 command_runner.go:130] > # enable_tracing = false
	I0809 18:58:59.627711  907909 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0809 18:58:59.627722  907909 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0809 18:58:59.627730  907909 command_runner.go:130] > # Number of samples to collect per million spans.
	I0809 18:58:59.627740  907909 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0809 18:58:59.627757  907909 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0809 18:58:59.627767  907909 command_runner.go:130] > [crio.stats]
	I0809 18:58:59.627776  907909 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0809 18:58:59.627784  907909 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0809 18:58:59.627790  907909 command_runner.go:130] > # stats_collection_period = 0
	I0809 18:58:59.627828  907909 command_runner.go:130] ! time="2023-08-09 18:58:59.621129242Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0809 18:58:59.627840  907909 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
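
The block above is CRI-O echoing its effective TOML configuration at startup. A minimal sketch of reading a couple of the settings shown (the pause_image set above and the commented-out default_runtime) back out of such a file, assuming the github.com/BurntSushi/toml parser and the conventional /etc/crio/crio.conf path:

	package main

	import (
		"fmt"

		"github.com/BurntSushi/toml" // assumed parser; CRI-O's config is TOML
	)

	func main() {
		var cfg struct {
			Crio struct {
				Runtime struct {
					DefaultRuntime string `toml:"default_runtime"`
				} `toml:"runtime"`
				Image struct {
					PauseImage string `toml:"pause_image"`
				} `toml:"image"`
			} `toml:"crio"`
		}
		// Conventional path; the dump above reflects this file's effect.
		if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
			panic(err)
		}
		fmt.Println("default_runtime:", cfg.Crio.Runtime.DefaultRuntime) // "" here (commented out)
		fmt.Println("pause_image:", cfg.Crio.Image.PauseImage)           // registry.k8s.io/pause:3.9
	}
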
	I0809 18:58:59.627923  907909 cni.go:84] Creating CNI manager for ""
	I0809 18:58:59.627938  907909 cni.go:136] 2 nodes found, recommending kindnet
	I0809 18:58:59.627946  907909 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0809 18:58:59.627965  907909 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-814696 NodeName:multinode-814696-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0809 18:58:59.628075  907909 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-814696-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
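
The kubeadm config rendered above is a single multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by ---). A minimal sketch of walking such a stream and printing each document's kind, assuming it has been saved as kubeadm.yaml (hypothetical name) and using gopkg.in/yaml.v3:

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("kubeadm.yaml") // hypothetical save of the config above
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			err := dec.Decode(&doc)
			if err == io.EOF {
				break // stream exhausted
			}
			if err != nil {
				panic(err)
			}
			fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
		}
	}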
	
	I0809 18:58:59.628136  907909 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-814696-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:multinode-814696 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0809 18:58:59.628193  907909 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0809 18:58:59.636911  907909 command_runner.go:130] > kubeadm
	I0809 18:58:59.636931  907909 command_runner.go:130] > kubectl
	I0809 18:58:59.636935  907909 command_runner.go:130] > kubelet
	I0809 18:58:59.636957  907909 binaries.go:44] Found k8s binaries, skipping transfer
	I0809 18:58:59.637007  907909 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0809 18:58:59.644773  907909 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0809 18:58:59.660502  907909 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0809 18:58:59.676540  907909 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0809 18:58:59.679808  907909 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
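
The bash pipeline above is an idempotent hosts-file update: it strips any existing line for control-plane.minikube.internal, appends the current control-plane IP, and copies the result back over /etc/hosts. A rough Go equivalent of the same filter-and-append step (a sketch, not minikube's implementation):

	package main

	import (
		"os"
		"strings"
	)

	// ensureHostsEntry drops any existing line for host and appends "ip<TAB>host",
	// mirroring the grep -v / echo / cp pipeline in the log above.
	func ensureHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
		kept := make([]string, 0, len(lines)+1)
		for _, line := range lines {
			if strings.HasSuffix(line, "\t"+host) {
				continue // stale entry for this hostname
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "192.168.58.2", "control-plane.minikube.internal"); err != nil {
			panic(err)
		}
	}
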
	I0809 18:58:59.690497  907909 host.go:66] Checking if "multinode-814696" exists ...
	I0809 18:58:59.690754  907909 config.go:182] Loaded profile config "multinode-814696": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0809 18:58:59.690793  907909 start.go:301] JoinCluster: &{Name:multinode-814696 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-814696 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 18:58:59.690882  907909 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0809 18:58:59.690924  907909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-814696
	I0809 18:58:59.708127  907909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33482 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/multinode-814696/id_rsa Username:docker}
	I0809 18:58:59.854217  907909 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token aqokka.bcp7sbijhm6yhc3z --discovery-token-ca-cert-hash sha256:85b611322a15d151ff0bcc3d793970c57c3eadea4a879931fb9494c00472255c 
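
The --discovery-token-ca-cert-hash in the printed join command is a SHA-256 digest of the DER-encoded Subject Public Key Info of the cluster CA certificate. A sketch of recomputing it from the CA cert, assuming the certificate path used elsewhere in this run:

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/hex"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA key.
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			panic(err)
		}
		sum := sha256.Sum256(spki)
		fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
	}
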
	I0809 18:58:59.858673  907909 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0809 18:58:59.858723  907909 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token aqokka.bcp7sbijhm6yhc3z --discovery-token-ca-cert-hash sha256:85b611322a15d151ff0bcc3d793970c57c3eadea4a879931fb9494c00472255c --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-814696-m02"
	I0809 18:58:59.893483  907909 command_runner.go:130] ! W0809 18:58:59.892994    1108 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0809 18:58:59.923604  907909 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1038-gcp\n", err: exit status 1
	I0809 18:58:59.988467  907909 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0809 18:59:02.113436  907909 command_runner.go:130] > [preflight] Running pre-flight checks
	I0809 18:59:02.113461  907909 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0809 18:59:02.113468  907909 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1038-gcp
	I0809 18:59:02.113476  907909 command_runner.go:130] > OS: Linux
	I0809 18:59:02.113481  907909 command_runner.go:130] > CGROUPS_CPU: enabled
	I0809 18:59:02.113487  907909 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0809 18:59:02.113492  907909 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0809 18:59:02.113497  907909 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0809 18:59:02.113505  907909 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0809 18:59:02.113514  907909 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0809 18:59:02.113527  907909 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0809 18:59:02.113541  907909 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0809 18:59:02.113547  907909 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0809 18:59:02.113554  907909 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0809 18:59:02.113562  907909 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0809 18:59:02.113568  907909 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0809 18:59:02.113574  907909 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0809 18:59:02.113579  907909 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0809 18:59:02.113587  907909 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0809 18:59:02.113592  907909 command_runner.go:130] > This node has joined the cluster:
	I0809 18:59:02.113603  907909 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0809 18:59:02.113614  907909 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0809 18:59:02.113625  907909 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0809 18:59:02.113647  907909 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token aqokka.bcp7sbijhm6yhc3z --discovery-token-ca-cert-hash sha256:85b611322a15d151ff0bcc3d793970c57c3eadea4a879931fb9494c00472255c --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-814696-m02": (2.254908978s)
	I0809 18:59:02.113665  907909 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0809 18:59:02.203348  907909 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0809 18:59:02.278539  907909 start.go:303] JoinCluster complete in 2.587737428s
	I0809 18:59:02.278571  907909 cni.go:84] Creating CNI manager for ""
	I0809 18:59:02.278578  907909 cni.go:136] 2 nodes found, recommending kindnet
	I0809 18:59:02.278641  907909 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0809 18:59:02.282358  907909 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0809 18:59:02.282379  907909 command_runner.go:130] >   Size: 3955775   	Blocks: 7736       IO Block: 4096   regular file
	I0809 18:59:02.282386  907909 command_runner.go:130] > Device: 37h/55d	Inode: 800976      Links: 1
	I0809 18:59:02.282392  907909 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0809 18:59:02.282398  907909 command_runner.go:130] > Access: 2023-05-09 19:53:47.000000000 +0000
	I0809 18:59:02.282403  907909 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I0809 18:59:02.282408  907909 command_runner.go:130] > Change: 2023-08-09 18:39:27.249115629 +0000
	I0809 18:59:02.282412  907909 command_runner.go:130] >  Birth: 2023-08-09 18:39:27.225113304 +0000
	I0809 18:59:02.282484  907909 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.27.4/kubectl ...
	I0809 18:59:02.282494  907909 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0809 18:59:02.299253  907909 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0809 18:59:02.545670  907909 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0809 18:59:02.549075  907909 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0809 18:59:02.551618  907909 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0809 18:59:02.562044  907909 command_runner.go:130] > daemonset.apps/kindnet configured
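
The unchanged/configured lines above are ordinary kubectl apply output for the kindnet CNI manifest, run through the versioned kubectl shipped with the cluster binaries. A sketch of the equivalent invocation from Go, with paths as they appear in the log (adjust per profile and version):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.27.4/kubectl",
			"apply",
			"--kubeconfig=/var/lib/minikube/kubeconfig",
			"-f", "/var/tmp/minikube/cni.yaml",
		).CombinedOutput()
		fmt.Print(string(out)) // "... unchanged" / "... configured" lines as seen above
		if err != nil {
			panic(err)
		}
	}
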
	I0809 18:59:02.566195  907909 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/17011-816603/kubeconfig
	I0809 18:59:02.566412  907909 kapi.go:59] client config for multinode-814696: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17011-816603/.minikube/profiles/multinode-814696/client.crt", KeyFile:"/home/jenkins/minikube-integration/17011-816603/.minikube/profiles/multinode-814696/client.key", CAFile:"/home/jenkins/minikube-integration/17011-816603/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d27100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0809 18:59:02.566785  907909 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0809 18:59:02.566797  907909 round_trippers.go:469] Request Headers:
	I0809 18:59:02.566805  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:59:02.566811  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:59:02.568761  907909 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0809 18:59:02.568783  907909 round_trippers.go:577] Response Headers:
	I0809 18:59:02.568795  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:59:02.568805  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:59:02.568814  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:59:02.568825  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:59:02.568834  907909 round_trippers.go:580]     Content-Length: 291
	I0809 18:59:02.568841  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:59:02 GMT
	I0809 18:59:02.568848  907909 round_trippers.go:580]     Audit-Id: aa7cd355-39d1-42b1-92c3-4ddad17904be
	I0809 18:59:02.568880  907909 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c3e8939c-f800-4097-babb-8dcae19cd8ea","resourceVersion":"408","creationTimestamp":"2023-08-09T18:58:00Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0809 18:59:02.568989  907909 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-814696" context rescaled to 1 replicas
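
The Scale GET above reads the coredns scale subresource before minikube pins the deployment at 1 replica (a no-op here, since the response already reports replicas: 1). A sketch of the same read-modify-write against the scale subresource using client-go; the kubeconfig path is hypothetical:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx := context.Background()
		deployments := cs.AppsV1().Deployments("kube-system")
		scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		scale.Spec.Replicas = 1 // no-op when already 1, as in the response above
		if _, err := deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
		fmt.Println("coredns scaled to", scale.Spec.Replicas)
	}
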
	I0809 18:59:02.569024  907909 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0809 18:59:02.572073  907909 out.go:177] * Verifying Kubernetes components...
	I0809 18:59:02.573533  907909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0809 18:59:02.586333  907909 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/17011-816603/kubeconfig
	I0809 18:59:02.586545  907909 kapi.go:59] client config for multinode-814696: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17011-816603/.minikube/profiles/multinode-814696/client.crt", KeyFile:"/home/jenkins/minikube-integration/17011-816603/.minikube/profiles/multinode-814696/client.key", CAFile:"/home/jenkins/minikube-integration/17011-816603/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d27100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0809 18:59:02.586775  907909 node_ready.go:35] waiting up to 6m0s for node "multinode-814696-m02" to be "Ready" ...
	I0809 18:59:02.586835  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696-m02
	I0809 18:59:02.586842  907909 round_trippers.go:469] Request Headers:
	I0809 18:59:02.586849  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:59:02.586858  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:59:02.588829  907909 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0809 18:59:02.588847  907909 round_trippers.go:577] Response Headers:
	I0809 18:59:02.588854  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:59:02.588860  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:59:02.588865  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:59:02.588870  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:59:02.588878  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:59:02 GMT
	I0809 18:59:02.588887  907909 round_trippers.go:580]     Audit-Id: d433d4ac-fa15-454f-866b-210bdddc7800
	I0809 18:59:02.589002  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696-m02","uid":"d61b8f34-c68d-45d6-875f-ee98db5389e4","resourceVersion":"451","creationTimestamp":"2023-08-09T18:59:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:59:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5210 chars]
	I0809 18:59:02.589367  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696-m02
	I0809 18:59:02.589380  907909 round_trippers.go:469] Request Headers:
	I0809 18:59:02.589387  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:59:02.589397  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:59:02.591373  907909 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0809 18:59:02.591392  907909 round_trippers.go:577] Response Headers:
	I0809 18:59:02.591402  907909 round_trippers.go:580]     Audit-Id: 3d4d6c6f-626c-4115-9770-6d6fa594d2eb
	I0809 18:59:02.591411  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:59:02.591420  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:59:02.591432  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:59:02.591443  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:59:02.591455  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:59:02 GMT
	I0809 18:59:02.591544  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696-m02","uid":"d61b8f34-c68d-45d6-875f-ee98db5389e4","resourceVersion":"451","creationTimestamp":"2023-08-09T18:59:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:59:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5210 chars]
	I0809 18:59:03.092234  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696-m02
	I0809 18:59:03.092256  907909 round_trippers.go:469] Request Headers:
	I0809 18:59:03.092267  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:59:03.092277  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:59:03.094641  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:59:03.094671  907909 round_trippers.go:577] Response Headers:
	I0809 18:59:03.094683  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:59:03.094692  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:59:03 GMT
	I0809 18:59:03.094702  907909 round_trippers.go:580]     Audit-Id: c36d96d9-4f71-426e-a181-61c5c2f48021
	I0809 18:59:03.094710  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:59:03.094716  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:59:03.094721  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:59:03.094851  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696-m02","uid":"d61b8f34-c68d-45d6-875f-ee98db5389e4","resourceVersion":"451","creationTimestamp":"2023-08-09T18:59:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:59:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5210 chars]
	I0809 18:59:03.592336  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696-m02
	I0809 18:59:03.592362  907909 round_trippers.go:469] Request Headers:
	I0809 18:59:03.592375  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:59:03.592385  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:59:03.594879  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:59:03.594905  907909 round_trippers.go:577] Response Headers:
	I0809 18:59:03.594917  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:59:03 GMT
	I0809 18:59:03.594927  907909 round_trippers.go:580]     Audit-Id: f835f5ae-e72b-4be8-9d3e-00dcbd98d6b8
	I0809 18:59:03.594936  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:59:03.594945  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:59:03.594954  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:59:03.594960  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:59:03.595085  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696-m02","uid":"d61b8f34-c68d-45d6-875f-ee98db5389e4","resourceVersion":"451","creationTimestamp":"2023-08-09T18:59:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:59:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5210 chars]
	I0809 18:59:04.092752  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696-m02
	I0809 18:59:04.092779  907909 round_trippers.go:469] Request Headers:
	I0809 18:59:04.092791  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:59:04.092802  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:59:04.095134  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:59:04.095159  907909 round_trippers.go:577] Response Headers:
	I0809 18:59:04.095169  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:59:04.095184  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:59:04 GMT
	I0809 18:59:04.095193  907909 round_trippers.go:580]     Audit-Id: 4790152d-e273-419a-95fc-1b2bb315c414
	I0809 18:59:04.095205  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:59:04.095217  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:59:04.095229  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:59:04.095401  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696-m02","uid":"d61b8f34-c68d-45d6-875f-ee98db5389e4","resourceVersion":"466","creationTimestamp":"2023-08-09T18:59:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:59:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5296 chars]
	I0809 18:59:04.095776  907909 node_ready.go:49] node "multinode-814696-m02" has status "Ready":"True"
	I0809 18:59:04.095796  907909 node_ready.go:38] duration metric: took 1.509003109s waiting for node "multinode-814696-m02" to be "Ready" ...
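
The repeated GETs of /api/v1/nodes/multinode-814696-m02 above are a roughly 500 ms poll until the node's Ready condition flips to True. A sketch of the same wait using client-go's typed API and the apimachinery wait helper; again, the kubeconfig path is hypothetical:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, "multinode-814696-m02", metav1.GetOptions{})
				if err != nil {
					return false, nil // treat errors as transient and keep polling
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Println("node multinode-814696-m02 is Ready")
	}
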
	I0809 18:59:04.095807  907909 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0809 18:59:04.095870  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0809 18:59:04.095880  907909 round_trippers.go:469] Request Headers:
	I0809 18:59:04.095891  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:59:04.095901  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:59:04.098976  907909 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0809 18:59:04.098996  907909 round_trippers.go:577] Response Headers:
	I0809 18:59:04.099005  907909 round_trippers.go:580]     Audit-Id: 60197826-4fc1-4507-a8e1-6f569d57a5d0
	I0809 18:59:04.099014  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:59:04.099023  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:59:04.099036  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:59:04.099049  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:59:04.099065  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:59:04 GMT
	I0809 18:59:04.100566  907909 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"466"},"items":[{"metadata":{"name":"coredns-5d78c9869d-zj6cv","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"6a7f440d-1020-4de5-9a75-42a2357a6e79","resourceVersion":"404","creationTimestamp":"2023-08-09T18:58:13Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"e9343197-b525-42fe-987e-679e974b9989","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:58:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9343197-b525-42fe-987e-679e974b9989\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68974 chars]
	I0809 18:59:04.103336  907909 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-zj6cv" in "kube-system" namespace to be "Ready" ...
	I0809 18:59:04.103422  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zj6cv
	I0809 18:59:04.103432  907909 round_trippers.go:469] Request Headers:
	I0809 18:59:04.103440  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:59:04.103446  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:59:04.105629  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:59:04.105646  907909 round_trippers.go:577] Response Headers:
	I0809 18:59:04.105652  907909 round_trippers.go:580]     Audit-Id: 8e4fae2c-b363-48e1-b967-d38d95a8699b
	I0809 18:59:04.105658  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:59:04.105663  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:59:04.105669  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:59:04.105682  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:59:04.105694  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:59:04 GMT
	I0809 18:59:04.105796  907909 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zj6cv","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"6a7f440d-1020-4de5-9a75-42a2357a6e79","resourceVersion":"404","creationTimestamp":"2023-08-09T18:58:13Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"e9343197-b525-42fe-987e-679e974b9989","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:58:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9343197-b525-42fe-987e-679e974b9989\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0809 18:59:04.106212  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:59:04.106223  907909 round_trippers.go:469] Request Headers:
	I0809 18:59:04.106231  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:59:04.106237  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:59:04.108409  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:59:04.108425  907909 round_trippers.go:577] Response Headers:
	I0809 18:59:04.108435  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:59:04 GMT
	I0809 18:59:04.108444  907909 round_trippers.go:580]     Audit-Id: 95964593-e106-4805-b259-9328a9b9d829
	I0809 18:59:04.108453  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:59:04.108462  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:59:04.108470  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:59:04.108475  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:59:04.108615  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"388","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0809 18:59:04.108950  907909 pod_ready.go:92] pod "coredns-5d78c9869d-zj6cv" in "kube-system" namespace has status "Ready":"True"
	I0809 18:59:04.108965  907909 pod_ready.go:81] duration metric: took 5.606742ms waiting for pod "coredns-5d78c9869d-zj6cv" in "kube-system" namespace to be "Ready" ...
	I0809 18:59:04.108972  907909 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-814696" in "kube-system" namespace to be "Ready" ...
	I0809 18:59:04.109026  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-814696
	I0809 18:59:04.109034  907909 round_trippers.go:469] Request Headers:
	I0809 18:59:04.109041  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:59:04.109047  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:59:04.110658  907909 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0809 18:59:04.110673  907909 round_trippers.go:577] Response Headers:
	I0809 18:59:04.110680  907909 round_trippers.go:580]     Audit-Id: 12684db3-ff16-4b4a-9873-70b7691d8b8e
	I0809 18:59:04.110685  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:59:04.110691  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:59:04.110696  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:59:04.110701  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:59:04.110706  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:59:04 GMT
	I0809 18:59:04.110842  907909 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-814696","namespace":"kube-system","uid":"d56666fc-bcce-4c57-9002-5f96937419ef","resourceVersion":"296","creationTimestamp":"2023-08-09T18:58:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"78f5ed5a72b5cebc9a28edbb5087be98","kubernetes.io/config.mirror":"78f5ed5a72b5cebc9a28edbb5087be98","kubernetes.io/config.seen":"2023-08-09T18:58:00.573681511Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:58:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0809 18:59:04.111164  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:59:04.111174  907909 round_trippers.go:469] Request Headers:
	I0809 18:59:04.111180  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:59:04.111186  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:59:04.112841  907909 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0809 18:59:04.112856  907909 round_trippers.go:577] Response Headers:
	I0809 18:59:04.112865  907909 round_trippers.go:580]     Audit-Id: 7b1e4fdb-9dc8-461d-9050-5247524fc332
	I0809 18:59:04.112874  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:59:04.112883  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:59:04.112893  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:59:04.112904  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:59:04.112912  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:59:04 GMT
	I0809 18:59:04.113061  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"388","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0809 18:59:04.113368  907909 pod_ready.go:92] pod "etcd-multinode-814696" in "kube-system" namespace has status "Ready":"True"
	I0809 18:59:04.113385  907909 pod_ready.go:81] duration metric: took 4.406355ms waiting for pod "etcd-multinode-814696" in "kube-system" namespace to be "Ready" ...
	I0809 18:59:04.113398  907909 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-814696" in "kube-system" namespace to be "Ready" ...
	I0809 18:59:04.113445  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-814696
	I0809 18:59:04.113459  907909 round_trippers.go:469] Request Headers:
	I0809 18:59:04.113466  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:59:04.113472  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:59:04.115222  907909 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0809 18:59:04.115236  907909 round_trippers.go:577] Response Headers:
	I0809 18:59:04.115242  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:59:04.115248  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:59:04.115253  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:59:04.115261  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:59:04 GMT
	I0809 18:59:04.115269  907909 round_trippers.go:580]     Audit-Id: 9712b243-e0e3-497a-8113-a1f778c8126b
	I0809 18:59:04.115280  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:59:04.115419  907909 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-814696","namespace":"kube-system","uid":"80103e38-6b90-40bc-b9b0-dc7f247037c1","resourceVersion":"279","creationTimestamp":"2023-08-09T18:58:00Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b5723f67cc7d49c7cfe7e7e252b5ea4b","kubernetes.io/config.mirror":"b5723f67cc7d49c7cfe7e7e252b5ea4b","kubernetes.io/config.seen":"2023-08-09T18:58:00.573685327Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:58:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0809 18:59:04.115857  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:59:04.115870  907909 round_trippers.go:469] Request Headers:
	I0809 18:59:04.115877  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:59:04.115884  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:59:04.117534  907909 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0809 18:59:04.117555  907909 round_trippers.go:577] Response Headers:
	I0809 18:59:04.117565  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:59:04.117575  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:59:04 GMT
	I0809 18:59:04.117585  907909 round_trippers.go:580]     Audit-Id: e2da9914-0acd-4b8c-94c8-07965cae6094
	I0809 18:59:04.117595  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:59:04.117605  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:59:04.117622  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:59:04.117745  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"388","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0809 18:59:04.118036  907909 pod_ready.go:92] pod "kube-apiserver-multinode-814696" in "kube-system" namespace has status "Ready":"True"
	I0809 18:59:04.118050  907909 pod_ready.go:81] duration metric: took 4.640642ms waiting for pod "kube-apiserver-multinode-814696" in "kube-system" namespace to be "Ready" ...
	I0809 18:59:04.118058  907909 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-814696" in "kube-system" namespace to be "Ready" ...
	I0809 18:59:04.118101  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-814696
	I0809 18:59:04.118108  907909 round_trippers.go:469] Request Headers:
	I0809 18:59:04.118115  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:59:04.118121  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:59:04.119682  907909 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0809 18:59:04.119707  907909 round_trippers.go:577] Response Headers:
	I0809 18:59:04.119717  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:59:04.119728  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:59:04 GMT
	I0809 18:59:04.119740  907909 round_trippers.go:580]     Audit-Id: 4808aa21-9a2f-4a72-9a74-cf4c48122942
	I0809 18:59:04.119750  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:59:04.119760  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:59:04.119772  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:59:04.119917  907909 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-814696","namespace":"kube-system","uid":"cc402858-37ab-4592-bb91-ad7df4d9d568","resourceVersion":"289","creationTimestamp":"2023-08-09T18:58:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0ca4303aa62b2bf8ee8c8fbe590c5cf3","kubernetes.io/config.mirror":"0ca4303aa62b2bf8ee8c8fbe590c5cf3","kubernetes.io/config.seen":"2023-08-09T18:58:00.573686726Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:58:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0809 18:59:04.120324  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:59:04.120339  907909 round_trippers.go:469] Request Headers:
	I0809 18:59:04.120346  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:59:04.120356  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:59:04.122182  907909 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0809 18:59:04.122202  907909 round_trippers.go:577] Response Headers:
	I0809 18:59:04.122208  907909 round_trippers.go:580]     Audit-Id: da26daa7-53f8-4407-b3db-772065dab071
	I0809 18:59:04.122214  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:59:04.122221  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:59:04.122229  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:59:04.122246  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:59:04.122255  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:59:04 GMT
	I0809 18:59:04.122364  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"388","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0809 18:59:04.122642  907909 pod_ready.go:92] pod "kube-controller-manager-multinode-814696" in "kube-system" namespace has status "Ready":"True"
	I0809 18:59:04.122657  907909 pod_ready.go:81] duration metric: took 4.594163ms waiting for pod "kube-controller-manager-multinode-814696" in "kube-system" namespace to be "Ready" ...
	I0809 18:59:04.122666  907909 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2tcmw" in "kube-system" namespace to be "Ready" ...
	I0809 18:59:04.293041  907909 request.go:628] Waited for 170.302642ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2tcmw
	I0809 18:59:04.293120  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2tcmw
	I0809 18:59:04.293127  907909 round_trippers.go:469] Request Headers:
	I0809 18:59:04.293140  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:59:04.293156  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:59:04.295512  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:59:04.295539  907909 round_trippers.go:577] Response Headers:
	I0809 18:59:04.295550  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:59:04.295558  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:59:04.295567  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:59:04.295574  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:59:04.295583  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:59:04 GMT
	I0809 18:59:04.295600  907909 round_trippers.go:580]     Audit-Id: 3358cd07-176d-41b3-af29-2ecffe34fb87
	I0809 18:59:04.295749  907909 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2tcmw","generateName":"kube-proxy-","namespace":"kube-system","uid":"d86217ed-fcd4-4549-9c9c-36742860c3e6","resourceVersion":"375","creationTimestamp":"2023-08-09T18:58:13Z","labels":{"controller-revision-hash":"86cc8bcbf7","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8eb73e3e-3a84-4784-aa7b-a41008607142","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:58:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8eb73e3e-3a84-4784-aa7b-a41008607142\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0809 18:59:04.493598  907909 request.go:628] Waited for 197.367656ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:59:04.493652  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:59:04.493657  907909 round_trippers.go:469] Request Headers:
	I0809 18:59:04.493664  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:59:04.493670  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:59:04.495949  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:59:04.495971  907909 round_trippers.go:577] Response Headers:
	I0809 18:59:04.495978  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:59:04.495987  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:59:04.495995  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:59:04 GMT
	I0809 18:59:04.496004  907909 round_trippers.go:580]     Audit-Id: 7f2769a5-0548-4584-9249-ef0390f74160
	I0809 18:59:04.496012  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:59:04.496024  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:59:04.496165  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"388","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0809 18:59:04.496504  907909 pod_ready.go:92] pod "kube-proxy-2tcmw" in "kube-system" namespace has status "Ready":"True"
	I0809 18:59:04.496520  907909 pod_ready.go:81] duration metric: took 373.848286ms waiting for pod "kube-proxy-2tcmw" in "kube-system" namespace to be "Ready" ...
	I0809 18:59:04.496530  907909 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nxp4p" in "kube-system" namespace to be "Ready" ...
	I0809 18:59:04.692921  907909 request.go:628] Waited for 196.308858ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nxp4p
	I0809 18:59:04.693002  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nxp4p
	I0809 18:59:04.693012  907909 round_trippers.go:469] Request Headers:
	I0809 18:59:04.693020  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:59:04.693027  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:59:04.695498  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:59:04.695522  907909 round_trippers.go:577] Response Headers:
	I0809 18:59:04.695533  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:59:04.695542  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:59:04 GMT
	I0809 18:59:04.695549  907909 round_trippers.go:580]     Audit-Id: f8217233-48cf-4a85-b77f-ee846ee43b96
	I0809 18:59:04.695558  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:59:04.695567  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:59:04.695576  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:59:04.695736  907909 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nxp4p","generateName":"kube-proxy-","namespace":"kube-system","uid":"48d083c2-5665-4997-8988-e9a279083a6c","resourceVersion":"460","creationTimestamp":"2023-08-09T18:59:01Z","labels":{"controller-revision-hash":"86cc8bcbf7","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8eb73e3e-3a84-4784-aa7b-a41008607142","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:59:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8eb73e3e-3a84-4784-aa7b-a41008607142\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0809 18:59:04.893531  907909 request.go:628] Waited for 197.347079ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-814696-m02
	I0809 18:59:04.893614  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696-m02
	I0809 18:59:04.893620  907909 round_trippers.go:469] Request Headers:
	I0809 18:59:04.893629  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:59:04.893636  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:59:04.896066  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:59:04.896092  907909 round_trippers.go:577] Response Headers:
	I0809 18:59:04.896109  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:59:04.896119  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:59:04.896126  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:59:04.896132  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:59:04.896140  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:59:04 GMT
	I0809 18:59:04.896149  907909 round_trippers.go:580]     Audit-Id: 1ee93aa0-c562-4acf-a276-36eac75a3726
	I0809 18:59:04.896288  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696-m02","uid":"d61b8f34-c68d-45d6-875f-ee98db5389e4","resourceVersion":"466","creationTimestamp":"2023-08-09T18:59:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:59:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5296 chars]
	I0809 18:59:04.896715  907909 pod_ready.go:92] pod "kube-proxy-nxp4p" in "kube-system" namespace has status "Ready":"True"
	I0809 18:59:04.896733  907909 pod_ready.go:81] duration metric: took 400.190165ms waiting for pod "kube-proxy-nxp4p" in "kube-system" namespace to be "Ready" ...
	I0809 18:59:04.896744  907909 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-814696" in "kube-system" namespace to be "Ready" ...
	I0809 18:59:05.093240  907909 request.go:628] Waited for 196.378007ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-814696
	I0809 18:59:05.093302  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-814696
	I0809 18:59:05.093308  907909 round_trippers.go:469] Request Headers:
	I0809 18:59:05.093316  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:59:05.093323  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:59:05.095801  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:59:05.095826  907909 round_trippers.go:577] Response Headers:
	I0809 18:59:05.095837  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:59:05.095847  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:59:05.095856  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:59:05 GMT
	I0809 18:59:05.095865  907909 round_trippers.go:580]     Audit-Id: a77cce1d-cf9d-4485-b92e-06c27661576e
	I0809 18:59:05.095875  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:59:05.095886  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:59:05.096000  907909 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-814696","namespace":"kube-system","uid":"b55faa0c-0699-4d6b-b004-d6bea8ecd1a8","resourceVersion":"309","creationTimestamp":"2023-08-09T18:58:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"267a38d43b18369f9e34d21719e40087","kubernetes.io/config.mirror":"267a38d43b18369f9e34d21719e40087","kubernetes.io/config.seen":"2023-08-09T18:58:00.573689109Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-09T18:58:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0809 18:59:05.293753  907909 request.go:628] Waited for 197.34394ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:59:05.293825  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-814696
	I0809 18:59:05.293831  907909 round_trippers.go:469] Request Headers:
	I0809 18:59:05.293839  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:59:05.293845  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:59:05.296137  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:59:05.296155  907909 round_trippers.go:577] Response Headers:
	I0809 18:59:05.296162  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:59:05 GMT
	I0809 18:59:05.296170  907909 round_trippers.go:580]     Audit-Id: 1f2c3163-07f2-4800-b5a4-8914f2a8925a
	I0809 18:59:05.296175  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:59:05.296183  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:59:05.296189  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:59:05.296203  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:59:05.296305  907909 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"388","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-09T18:57:57Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0809 18:59:05.296641  907909 pod_ready.go:92] pod "kube-scheduler-multinode-814696" in "kube-system" namespace has status "Ready":"True"
	I0809 18:59:05.296656  907909 pod_ready.go:81] duration metric: took 399.905856ms waiting for pod "kube-scheduler-multinode-814696" in "kube-system" namespace to be "Ready" ...
	I0809 18:59:05.296668  907909 pod_ready.go:38] duration metric: took 1.200847656s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0809 18:59:05.296684  907909 system_svc.go:44] waiting for kubelet service to be running ....
	I0809 18:59:05.296730  907909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0809 18:59:05.307602  907909 system_svc.go:56] duration metric: took 10.902475ms WaitForService to wait for kubelet.
	I0809 18:59:05.307656  907909 kubeadm.go:581] duration metric: took 2.73858556s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0809 18:59:05.307687  907909 node_conditions.go:102] verifying NodePressure condition ...
	I0809 18:59:05.493109  907909 request.go:628] Waited for 185.340046ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0809 18:59:05.493174  907909 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0809 18:59:05.493178  907909 round_trippers.go:469] Request Headers:
	I0809 18:59:05.493186  907909 round_trippers.go:473]     Accept: application/json, */*
	I0809 18:59:05.493194  907909 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0809 18:59:05.495678  907909 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0809 18:59:05.495702  907909 round_trippers.go:577] Response Headers:
	I0809 18:59:05.495709  907909 round_trippers.go:580]     Content-Type: application/json
	I0809 18:59:05.495715  907909 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dd0d608f-dc63-40f4-9b4d-99223f610f68
	I0809 18:59:05.495721  907909 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0963476f-69fe-46e0-9d79-fcc57ab77616
	I0809 18:59:05.495726  907909 round_trippers.go:580]     Date: Wed, 09 Aug 2023 18:59:05 GMT
	I0809 18:59:05.495735  907909 round_trippers.go:580]     Audit-Id: f7b53ba6-92b8-4645-8acd-f671b55b425f
	I0809 18:59:05.495745  907909 round_trippers.go:580]     Cache-Control: no-cache, private
	I0809 18:59:05.495948  907909 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"467"},"items":[{"metadata":{"name":"multinode-814696","uid":"fd507105-e42e-47af-bf66-72a7b2e220d5","resourceVersion":"388","creationTimestamp":"2023-08-09T18:57:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-814696","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e286a113bb5db20a65222adef757d15268cdbb1a","minikube.k8s.io/name":"multinode-814696","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_09T18_58_01_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12288 chars]
	I0809 18:59:05.496636  907909 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0809 18:59:05.496657  907909 node_conditions.go:123] node cpu capacity is 8
	I0809 18:59:05.496669  907909 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0809 18:59:05.496673  907909 node_conditions.go:123] node cpu capacity is 8
	I0809 18:59:05.496676  907909 node_conditions.go:105] duration metric: took 188.984492ms to run NodePressure ...
	I0809 18:59:05.496689  907909 start.go:228] waiting for startup goroutines ...
	I0809 18:59:05.496722  907909 start.go:242] writing updated cluster config ...
	I0809 18:59:05.497098  907909 ssh_runner.go:195] Run: rm -f paused
	I0809 18:59:05.545199  907909 start.go:599] kubectl: 1.27.4, cluster: 1.27.4 (minor skew: 0)
	I0809 18:59:05.547618  907909 out.go:177] * Done! kubectl is now configured to use "multinode-814696" cluster and "default" namespace by default
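The pod_ready/node_ready entries above are minikube polling the API server until every system-critical pod reports the Ready condition, and the "Waited for … due to client-side throttling" lines are client-go's default client-side rate limiter pacing those GETs. Below is a minimal sketch (not minikube's actual code) of the same wait pattern using client-go; the kubeconfig path, poll interval, and pod name are illustrative assumptions:

```go
// Sketch of the readiness poll the pod_ready.go lines above perform:
// fetch a pod, check its Ready condition, retry until a timeout.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's Ready condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	// wait.PollImmediate is the classic (now deprecated) polling helper.
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat errors as transient and keep retrying
		}
		return podIsReady(pod), nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// e.g. the coredns pod waited on at 18:59:04 above
	if err := waitPodReady(cs, "kube-system", "coredns-5d78c9869d-zj6cv", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}
```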
	
	* 
	* ==> CRI-O <==
	* Aug 09 18:58:45 multinode-814696 crio[959]: time="2023-08-09 18:58:45.074797359Z" level=info msg="Created container a290e816ccef95e12bfc2497541d76e9ea9a39ff83d5e42281a73c0c89d35af4: kube-system/storage-provisioner/storage-provisioner" id=35ebb6c1-bbee-4f6d-9899-698df65f1dc3 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 09 18:58:45 multinode-814696 crio[959]: time="2023-08-09 18:58:45.074925936Z" level=info msg="Starting container: 1518cb45ef4e2bf539e17c144b1eb0765140692ba834b4973d040f1d43f702e5" id=d08d7476-21ac-49ce-b275-5a58c9f075e7 name=/runtime.v1.RuntimeService/StartContainer
	Aug 09 18:58:45 multinode-814696 crio[959]: time="2023-08-09 18:58:45.075203553Z" level=info msg="Starting container: a290e816ccef95e12bfc2497541d76e9ea9a39ff83d5e42281a73c0c89d35af4" id=c8c00a3f-96ce-4666-b83d-2507ec4ee96d name=/runtime.v1.RuntimeService/StartContainer
	Aug 09 18:58:45 multinode-814696 crio[959]: time="2023-08-09 18:58:45.084008288Z" level=info msg="Started container" PID=2343 containerID=a290e816ccef95e12bfc2497541d76e9ea9a39ff83d5e42281a73c0c89d35af4 description=kube-system/storage-provisioner/storage-provisioner id=c8c00a3f-96ce-4666-b83d-2507ec4ee96d name=/runtime.v1.RuntimeService/StartContainer sandboxID=d5945488410872a1b9dd6125a5783deae06a4d48f4b16ab5556705e6753ac66c
	Aug 09 18:58:45 multinode-814696 crio[959]: time="2023-08-09 18:58:45.084275564Z" level=info msg="Started container" PID=2344 containerID=1518cb45ef4e2bf539e17c144b1eb0765140692ba834b4973d040f1d43f702e5 description=kube-system/coredns-5d78c9869d-zj6cv/coredns id=d08d7476-21ac-49ce-b275-5a58c9f075e7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=93d79633d02163e5b235fdd60953be450a5d0901d3e7576fd3e8fd841a07252e
	Aug 09 18:59:06 multinode-814696 crio[959]: time="2023-08-09 18:59:06.590787929Z" level=info msg="Running pod sandbox: default/busybox-67b7f59bb-wvdrx/POD" id=94de7ddf-78b0-4043-8235-448c03652489 name=/runtime.v1.RuntimeService/RunPodSandbox
	Aug 09 18:59:06 multinode-814696 crio[959]: time="2023-08-09 18:59:06.590862218Z" level=warning msg="Allowed annotations are specified for workload []"
	Aug 09 18:59:06 multinode-814696 crio[959]: time="2023-08-09 18:59:06.605192377Z" level=info msg="Got pod network &{Name:busybox-67b7f59bb-wvdrx Namespace:default ID:3f58f765453952b35458bba9afb682228a750851846599c20d58bf5a04558a08 UID:15a75c58-d029-44d9-bc0e-f3d5976471ef NetNS:/var/run/netns/e81ebca8-8ed1-4510-9688-7a66a380d92d Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 09 18:59:06 multinode-814696 crio[959]: time="2023-08-09 18:59:06.605237014Z" level=info msg="Adding pod default_busybox-67b7f59bb-wvdrx to CNI network \"kindnet\" (type=ptp)"
	Aug 09 18:59:06 multinode-814696 crio[959]: time="2023-08-09 18:59:06.613963308Z" level=info msg="Got pod network &{Name:busybox-67b7f59bb-wvdrx Namespace:default ID:3f58f765453952b35458bba9afb682228a750851846599c20d58bf5a04558a08 UID:15a75c58-d029-44d9-bc0e-f3d5976471ef NetNS:/var/run/netns/e81ebca8-8ed1-4510-9688-7a66a380d92d Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 09 18:59:06 multinode-814696 crio[959]: time="2023-08-09 18:59:06.614078942Z" level=info msg="Checking pod default_busybox-67b7f59bb-wvdrx for CNI network kindnet (type=ptp)"
	Aug 09 18:59:06 multinode-814696 crio[959]: time="2023-08-09 18:59:06.634701550Z" level=info msg="Ran pod sandbox 3f58f765453952b35458bba9afb682228a750851846599c20d58bf5a04558a08 with infra container: default/busybox-67b7f59bb-wvdrx/POD" id=94de7ddf-78b0-4043-8235-448c03652489 name=/runtime.v1.RuntimeService/RunPodSandbox
	Aug 09 18:59:06 multinode-814696 crio[959]: time="2023-08-09 18:59:06.635823624Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=a2a2af00-8720-4811-a541-f530c758eba3 name=/runtime.v1.ImageService/ImageStatus
	Aug 09 18:59:06 multinode-814696 crio[959]: time="2023-08-09 18:59:06.636060193Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=a2a2af00-8720-4811-a541-f530c758eba3 name=/runtime.v1.ImageService/ImageStatus
	Aug 09 18:59:06 multinode-814696 crio[959]: time="2023-08-09 18:59:06.636818799Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=48e62276-b554-4584-8fcc-affc812f95b0 name=/runtime.v1.ImageService/PullImage
	Aug 09 18:59:06 multinode-814696 crio[959]: time="2023-08-09 18:59:06.642632574Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Aug 09 18:59:06 multinode-814696 crio[959]: time="2023-08-09 18:59:06.905497343Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Aug 09 18:59:07 multinode-814696 crio[959]: time="2023-08-09 18:59:07.490829348Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335" id=48e62276-b554-4584-8fcc-affc812f95b0 name=/runtime.v1.ImageService/PullImage
	Aug 09 18:59:07 multinode-814696 crio[959]: time="2023-08-09 18:59:07.492091963Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=da1d83e7-2a92-47b8-bc22-a4b4ea3eac99 name=/runtime.v1.ImageService/ImageStatus
	Aug 09 18:59:07 multinode-814696 crio[959]: time="2023-08-09 18:59:07.492927312Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=da1d83e7-2a92-47b8-bc22-a4b4ea3eac99 name=/runtime.v1.ImageService/ImageStatus
	Aug 09 18:59:07 multinode-814696 crio[959]: time="2023-08-09 18:59:07.493809691Z" level=info msg="Creating container: default/busybox-67b7f59bb-wvdrx/busybox" id=5bbb588f-7250-4d44-bb62-d68c7809fc47 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 09 18:59:07 multinode-814696 crio[959]: time="2023-08-09 18:59:07.493906244Z" level=warning msg="Allowed annotations are specified for workload []"
	Aug 09 18:59:07 multinode-814696 crio[959]: time="2023-08-09 18:59:07.584178580Z" level=info msg="Created container a4460940044a8fcfa64206b70a8ecf0173d5ab1a568c72da3c7a9566a248b32e: default/busybox-67b7f59bb-wvdrx/busybox" id=5bbb588f-7250-4d44-bb62-d68c7809fc47 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 09 18:59:07 multinode-814696 crio[959]: time="2023-08-09 18:59:07.584903453Z" level=info msg="Starting container: a4460940044a8fcfa64206b70a8ecf0173d5ab1a568c72da3c7a9566a248b32e" id=4c140c49-c1fc-47fb-8e66-ad15397b99e5 name=/runtime.v1.RuntimeService/StartContainer
	Aug 09 18:59:07 multinode-814696 crio[959]: time="2023-08-09 18:59:07.593557381Z" level=info msg="Started container" PID=2514 containerID=a4460940044a8fcfa64206b70a8ecf0173d5ab1a568c72da3c7a9566a248b32e description=default/busybox-67b7f59bb-wvdrx/busybox id=4c140c49-c1fc-47fb-8e66-ad15397b99e5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3f58f765453952b35458bba9afb682228a750851846599c20d58bf5a04558a08
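The CRI-O entries above trace one CRI call sequence for the busybox pod: RunPodSandbox, ImageStatus (image not found), PullImage, ImageStatus again, CreateContainer, StartContainer. A hedged sketch of issuing the first two of those calls directly against the same socket (`unix:///var/run/crio/crio.sock`, matching the cri-socket annotation in this report), using the gRPC client types from `k8s.io/cri-api`; error handling is minimal:

```go
// Sketch of talking to the CRI ImageService the kubelet uses above:
// check image status, then pull when the image is not found.
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	img := runtimeapi.NewImageServiceClient(conn)
	spec := &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28"}

	// "Checking image status" in the CRI-O log above
	st, err := img.ImageStatus(context.TODO(), &runtimeapi.ImageStatusRequest{Image: spec})
	if err != nil {
		panic(err)
	}
	if st.Image == nil {
		// "Image ... not found" -> "Pulling image"
		if _, err := img.PullImage(context.TODO(), &runtimeapi.PullImageRequest{Image: spec}); err != nil {
			panic(err)
		}
	}
	fmt.Println("image present")
}
```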
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	a4460940044a8       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 seconds ago        Running             busybox                   0                   3f58f76545395       busybox-67b7f59bb-wvdrx
	1518cb45ef4e2       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      26 seconds ago       Running             coredns                   0                   93d79633d0216       coredns-5d78c9869d-zj6cv
	a290e816ccef9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      26 seconds ago       Running             storage-provisioner       0                   d594548841087       storage-provisioner
	ee8fd3704b585       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da                                      58 seconds ago       Running             kindnet-cni               0                   4d955d42ac47e       kindnet-n72x8
	04e93892b1013       6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4                                      58 seconds ago       Running             kube-proxy                0                   be281f0e4c4f5       kube-proxy-2tcmw
	2ee55013df0a9       98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16                                      About a minute ago   Running             kube-scheduler            0                   f978ab4dd55fc       kube-scheduler-multinode-814696
	f9ce1d33945a4       e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c                                      About a minute ago   Running             kube-apiserver            0                   c74bfba434ad6       kube-apiserver-multinode-814696
	c3bcae54127fb       f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5                                      About a minute ago   Running             kube-controller-manager   0                   4ff4b44a92e20       kube-controller-manager-multinode-814696
	0406b83f8c1f1       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681                                      About a minute ago   Running             etcd                      0                   63e8e2845af86       etcd-multinode-814696
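The table above is what the CRI RuntimeService reports for this node (the same data `crictl ps -a` renders). A sketch, under the same socket assumption as the previous example, that lists containers and prints the 13-character truncated IDs used in the CONTAINER and POD ID columns:

```go
// Sketch: list containers over CRI the way the table above was produced.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.ListContainers(context.TODO(), &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		created := time.Unix(0, c.CreatedAt) // CreatedAt is in nanoseconds
		fmt.Printf("%.13s  %s  %s  %s  attempt=%d  pod=%.13s\n",
			c.Id, created.Format(time.RFC3339), c.State, c.Metadata.Name,
			c.Metadata.Attempt, c.PodSandboxId)
	}
}
```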
	
	* 
	* ==> coredns [1518cb45ef4e2bf539e17c144b1eb0765140692ba834b4973d040f1d43f702e5] <==
	* [INFO] 10.244.1.2:41549 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000117034s
	[INFO] 10.244.0.3:35019 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126461s
	[INFO] 10.244.0.3:43928 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001432832s
	[INFO] 10.244.0.3:47386 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00008559s
	[INFO] 10.244.0.3:55992 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00006742s
	[INFO] 10.244.0.3:53516 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.000927031s
	[INFO] 10.244.0.3:33545 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000063249s
	[INFO] 10.244.0.3:46783 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000058533s
	[INFO] 10.244.0.3:34258 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000036443s
	[INFO] 10.244.1.2:47869 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120512s
	[INFO] 10.244.1.2:33027 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000092409s
	[INFO] 10.244.1.2:43569 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000081039s
	[INFO] 10.244.1.2:47416 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000051994s
	[INFO] 10.244.0.3:44864 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110041s
	[INFO] 10.244.0.3:53198 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090956s
	[INFO] 10.244.0.3:58712 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000058027s
	[INFO] 10.244.0.3:58259 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000039945s
	[INFO] 10.244.1.2:51141 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107886s
	[INFO] 10.244.1.2:50759 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000146599s
	[INFO] 10.244.1.2:33595 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000117874s
	[INFO] 10.244.1.2:34247 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000082277s
	[INFO] 10.244.0.3:56992 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000094308s
	[INFO] 10.244.0.3:44047 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000070434s
	[INFO] 10.244.0.3:59312 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000058803s
	[INFO] 10.244.0.3:43525 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000053347s
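The coredns entries follow the log plugin's line format, and the numbered queries for 10.244.0.3 show cluster DNS search-path expansion at work: `kubernetes.default` is tried through the pod's search domains (NXDOMAIN for `kubernetes.default.default.svc.cluster.local`, NOERROR for `kubernetes.default.svc.cluster.local`) while the bare name is forwarded upstream and gets an NXDOMAIN with `rd,ra` flags. A small sketch that parses one of these lines; the field layout is read off the log lines themselves:

```go
// Parse one coredns "log" plugin line into its main fields:
// {remote}:{port} - {id} "{type} {class} {name} {proto} {size} {do} {bufsize}" {rcode} {flags} {rsize} {duration}
package main

import (
	"fmt"
	"regexp"
)

var logRe = regexp.MustCompile(
	`^\[INFO\] ([\d.]+):(\d+) - (\d+) "(\S+) IN (\S+) (\S+) (\d+) (\S+) (\d+)" (\S+) (\S+) (\d+) (\S+)$`)

func main() {
	line := `[INFO] 10.244.0.3:43928 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001432832s`
	m := logRe.FindStringSubmatch(line)
	if m == nil {
		panic("unparsed line")
	}
	// m[1]=client IP, m[4]=qtype, m[5]=name, m[10]=rcode, m[13]=duration
	fmt.Printf("client=%s qtype=%s name=%s rcode=%s took=%s\n", m[1], m[4], m[5], m[10], m[13])
}
```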
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-814696
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-814696
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e286a113bb5db20a65222adef757d15268cdbb1a
	                    minikube.k8s.io/name=multinode-814696
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_09T18_58_01_0700
	                    minikube.k8s.io/version=v1.31.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Aug 2023 18:57:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-814696
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Aug 2023 18:59:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Aug 2023 18:58:44 +0000   Wed, 09 Aug 2023 18:57:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Aug 2023 18:58:44 +0000   Wed, 09 Aug 2023 18:57:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Aug 2023 18:58:44 +0000   Wed, 09 Aug 2023 18:57:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Aug 2023 18:58:44 +0000   Wed, 09 Aug 2023 18:58:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-814696
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	System Info:
	  Machine ID:                 7d7211d10d0e4b0485952f572db04744
	  System UUID:                f9dea788-0431-4f2b-9205-8c70030fa417
	  Boot ID:                    ea1f61fe-b434-46c1-afe7-153d4b2d65ef
	  Kernel Version:             5.15.0-1038-gcp
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.27.4
	  Kube-Proxy Version:         v1.27.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-wvdrx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 coredns-5d78c9869d-zj6cv                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     59s
	  kube-system                 etcd-multinode-814696                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         72s
	  kube-system                 kindnet-n72x8                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      59s
	  kube-system                 kube-apiserver-multinode-814696             250m (3%)     0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-controller-manager-multinode-814696    200m (2%)     0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-proxy-2tcmw                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-scheduler-multinode-814696             100m (1%)     0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 57s   kube-proxy       
	  Normal  Starting                 72s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  72s   kubelet          Node multinode-814696 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    72s   kubelet          Node multinode-814696 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     72s   kubelet          Node multinode-814696 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           60s   node-controller  Node multinode-814696 event: Registered Node multinode-814696 in Controller
	  Normal  NodeReady                28s   kubelet          Node multinode-814696 status is now: NodeReady
	
	
	Name:               multinode-814696-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-814696-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Aug 2023 18:59:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-814696-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Aug 2023 18:59:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Aug 2023 18:59:03 +0000   Wed, 09 Aug 2023 18:59:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Aug 2023 18:59:03 +0000   Wed, 09 Aug 2023 18:59:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Aug 2023 18:59:03 +0000   Wed, 09 Aug 2023 18:59:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Aug 2023 18:59:03 +0000   Wed, 09 Aug 2023 18:59:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-814696-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	System Info:
	  Machine ID:                 f7604401d39e4a45b7b61b9850ca455c
	  System UUID:                d07c6f96-9746-4faf-a61b-e17ffbad2d7c
	  Boot ID:                    ea1f61fe-b434-46c1-afe7-153d4b2d65ef
	  Kernel Version:             5.15.0-1038-gcp
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.27.4
	  Kube-Proxy Version:         v1.27.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-jxlzc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 kindnet-fm85q              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11s
	  kube-system                 kube-proxy-nxp4p           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 9s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  11s (x5 over 12s)  kubelet          Node multinode-814696-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11s (x5 over 12s)  kubelet          Node multinode-814696-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11s (x5 over 12s)  kubelet          Node multinode-814696-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10s                node-controller  Node multinode-814696-m02 event: Registered Node multinode-814696-m02 in Controller
	  Normal  NodeReady                9s                 kubelet          Node multinode-814696-m02 status is now: NodeReady
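To pull just the capacity and allocatable figures from both nodes without the full describe output, a jsonpath query over the Node objects is enough (sketch):

    kubectl --context multinode-814696 get nodes -o jsonpath=\
    '{range .items[*]}{.metadata.name}{"\t"}{.status.allocatable.cpu}{"\t"}{.status.allocatable.memory}{"\n"}{end}'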
	
	* 
	* ==> dmesg <==
	* [  +0.007359] FS-Cache: O-key=[8] 'bea40f0200000000'
	[  +0.004926] FS-Cache: N-cookie c=00000034 [p=00000027 fl=2 nc=0 na=1]
	[  +0.006608] FS-Cache: N-cookie d=000000004c2712ea{9p.inode} n=0000000080630bc4
	[  +0.008736] FS-Cache: N-key=[8] 'bea40f0200000000'
	[  +2.843207] FS-Cache: Duplicate cookie detected
	[  +0.004724] FS-Cache: O-cookie c=00000037 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006740] FS-Cache: O-cookie d=00000000bb1401be{9P.session} n=00000000868856b7
	[  +0.007516] FS-Cache: O-key=[10] '34323937313531323330'
	[  +0.005345] FS-Cache: N-cookie c=00000038 [p=00000002 fl=2 nc=0 na=1]
	[  +0.006654] FS-Cache: N-cookie d=00000000bb1401be{9P.session} n=0000000032e17c6a
	[  +0.008918] FS-Cache: N-key=[10] '34323937313531323330'
	[Aug 9 18:50] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 93 4f 7d 89 d1 46 70 9f b9 c1 47 08 00
	[  +1.019509] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 4e 93 4f 7d 89 d1 46 70 9f b9 c1 47 08 00
	[  +2.019754] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 93 4f 7d 89 d1 46 70 9f b9 c1 47 08 00
	[  +4.187595] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 4e 93 4f 7d 89 d1 46 70 9f b9 c1 47 08 00
	[  +8.191166] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 93 4f 7d 89 d1 46 70 9f b9 c1 47 08 00
	[ +16.130450] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 93 4f 7d 89 d1 46 70 9f b9 c1 47 08 00
	[Aug 9 18:51] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 93 4f 7d 89 d1 46 70 9f b9 c1 47 08 00
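The recurring "martian source" entries mean the kernel saw packets claiming a source of 127.0.0.1 arrive on eth0; they are logged only while the log_martians sysctl is on and are noise here, not the failure. The setting can be inspected, or silenced, on the node (sketch):

    minikube -p multinode-814696 ssh "sysctl net.ipv4.conf.all.log_martians"
    minikube -p multinode-814696 ssh "sudo sysctl -w net.ipv4.conf.all.log_martians=0"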
	
	* 
	* ==> etcd [0406b83f8c1f13552f0710a6eb4564571eef68a2c288a3b9f8dea85a4cf38207] <==
	* {"level":"info","ts":"2023-08-09T18:57:55.170Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-08-09T18:57:55.172Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-08-09T18:57:55.172Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-08-09T18:57:55.172Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-08-09T18:57:55.172Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-08-09T18:57:55.172Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-08-09T18:57:55.662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-08-09T18:57:55.662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-08-09T18:57:55.662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-08-09T18:57:55.662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-08-09T18:57:55.662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-08-09T18:57:55.662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-08-09T18:57:55.662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-08-09T18:57:55.663Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-09T18:57:55.663Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-814696 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-09T18:57:55.663Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-09T18:57:55.663Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-09T18:57:55.664Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-09T18:57:55.664Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-08-09T18:57:55.664Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-09T18:57:55.664Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-09T18:57:55.664Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-09T18:57:55.665Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-09T18:57:55.665Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-08-09T18:58:51.979Z","caller":"traceutil/trace.go:171","msg":"trace[1890866527] transaction","detail":"{read_only:false; response_revision:414; number_of_response:1; }","duration":"228.005459ms","start":"2023-08-09T18:58:51.751Z","end":"2023-08-09T18:58:51.979Z","steps":["trace[1890866527] 'process raft request'  (duration: 227.898941ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  18:59:12 up  2:41,  0 users,  load average: 0.79, 0.97, 1.73
	Linux multinode-814696 5.15.0-1038-gcp #46~20.04.1-Ubuntu SMP Fri Jul 14 09:48:19 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [ee8fd3704b5857cb81db63ab8860eeee122ce4603dd959f940870f6e7e85eff7] <==
	* I0809 18:58:14.156335       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0809 18:58:14.156431       1 main.go:107] hostIP = 192.168.58.2
	podIP = 192.168.58.2
	I0809 18:58:14.156614       1 main.go:116] setting mtu 1500 for CNI 
	I0809 18:58:14.156635       1 main.go:146] kindnetd IP family: "ipv4"
	I0809 18:58:14.156666       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0809 18:58:44.487872       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0809 18:58:44.495678       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0809 18:58:44.495707       1 main.go:227] handling current node
	I0809 18:58:54.501559       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0809 18:58:54.501584       1 main.go:227] handling current node
	I0809 18:59:04.514133       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0809 18:59:04.514156       1 main.go:227] handling current node
	I0809 18:59:04.514167       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0809 18:59:04.514172       1 main.go:250] Node multinode-814696-m02 has CIDR [10.244.1.0/24] 
	I0809 18:59:04.514334       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
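kindnet installs a host route for every remote node's pod CIDR, which is what the final "Adding route" line records: pod traffic for 10.244.1.0/24 is sent via the m02 node IP. The installed route can be confirmed on the node (sketch):

    minikube -p multinode-814696 ssh "ip route show 10.244.1.0/24"
    # expected shape: 10.244.1.0/24 via 192.168.58.3 dev eth0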
	
	* 
	* ==> kube-apiserver [f9ce1d33945a4a5f64f8b7193bbd66087605cc89e5f55ea9a262ae5ff752284b] <==
	* I0809 18:57:57.655944       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0809 18:57:57.656324       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0809 18:57:57.656632       1 aggregator.go:152] initial CRD sync complete...
	I0809 18:57:57.656719       1 autoregister_controller.go:141] Starting autoregister controller
	I0809 18:57:57.656758       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0809 18:57:57.656790       1 cache.go:39] Caches are synced for autoregister controller
	I0809 18:57:57.657149       1 controller.go:624] quota admission added evaluator for: namespaces
	E0809 18:57:57.664446       1 controller.go:150] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0809 18:57:57.707867       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0809 18:57:58.220857       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0809 18:57:58.457199       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0809 18:57:58.460647       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0809 18:57:58.460663       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0809 18:57:58.828450       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0809 18:57:58.861684       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0809 18:57:58.983603       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0809 18:57:58.990410       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0809 18:57:58.991355       1 controller.go:624] quota admission added evaluator for: endpoints
	I0809 18:57:58.995108       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0809 18:57:59.585077       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0809 18:58:00.511404       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0809 18:58:00.520322       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0809 18:58:00.530804       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0809 18:58:13.090668       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0809 18:58:13.240746       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
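Nothing in the apiserver log is abnormal; the entries are the usual quota-evaluator registrations during bootstrap. Its readiness can be probed directly through kubectl (sketch):

    kubectl --context multinode-814696 get --raw='/readyz?verbose' | tail -n 5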
	
	* 
	* ==> kube-controller-manager [c3bcae54127fb0f8a96375c707927f7fd0dbdc2ce2df89e9a649410dd4055947] <==
	* I0809 18:58:12.540632       1 shared_informer.go:318] Caches are synced for attach detach
	I0809 18:58:12.589029       1 shared_informer.go:318] Caches are synced for resource quota
	I0809 18:58:12.645211       1 shared_informer.go:318] Caches are synced for resource quota
	I0809 18:58:12.957586       1 shared_informer.go:318] Caches are synced for garbage collector
	I0809 18:58:13.034546       1 shared_informer.go:318] Caches are synced for garbage collector
	I0809 18:58:13.034598       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0809 18:58:13.094068       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5d78c9869d to 2"
	I0809 18:58:13.249340       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-2tcmw"
	I0809 18:58:13.250821       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-n72x8"
	I0809 18:58:13.397307       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5d78c9869d to 1 from 2"
	I0809 18:58:13.468441       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-lfx8p"
	I0809 18:58:13.475118       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-zj6cv"
	I0809 18:58:13.556363       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5d78c9869d-lfx8p"
	I0809 18:58:47.508017       1 node_lifecycle_controller.go:1046] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0809 18:59:01.973983       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-814696-m02\" does not exist"
	I0809 18:59:01.981865       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-814696-m02" podCIDRs=[10.244.1.0/24]
	I0809 18:59:01.983861       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-nxp4p"
	I0809 18:59:01.984109       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-fm85q"
	I0809 18:59:02.510916       1 event.go:307] "Event occurred" object="multinode-814696-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-814696-m02 event: Registered Node multinode-814696-m02 in Controller"
	I0809 18:59:02.510979       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-814696-m02"
	W0809 18:59:03.726212       1 topologycache.go:232] Can't get CPU or zone information for multinode-814696-m02 node
	I0809 18:59:06.269950       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-67b7f59bb to 2"
	I0809 18:59:06.276101       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-jxlzc"
	I0809 18:59:06.281434       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-wvdrx"
	I0809 18:59:07.521211       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb-jxlzc" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-67b7f59bb-jxlzc"
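The range allocator's "Set node PodCIDR" line shows 10.244.1.0/24 being assigned to the joining node; the assignment is persisted in the Node spec and can be read back (sketch):

    kubectl --context multinode-814696 get node multinode-814696-m02 \
      -o jsonpath='{.spec.podCIDR}{"\n"}'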
	
	* 
	* ==> kube-proxy [04e93892b10138e3feca29856a9020497f35f2bda5a64c0f8b5ebe440b9dacb0] <==
	* I0809 18:58:14.263457       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I0809 18:58:14.263529       1 server_others.go:110] "Detected node IP" address="192.168.58.2"
	I0809 18:58:14.263556       1 server_others.go:554] "Using iptables proxy"
	I0809 18:58:14.371054       1 server_others.go:192] "Using iptables Proxier"
	I0809 18:58:14.371109       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0809 18:58:14.371122       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0809 18:58:14.371158       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0809 18:58:14.371198       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0809 18:58:14.372038       1 server.go:658] "Version info" version="v1.27.4"
	I0809 18:58:14.372062       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0809 18:58:14.373026       1 config.go:188] "Starting service config controller"
	I0809 18:58:14.373110       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0809 18:58:14.373451       1 config.go:97] "Starting endpoint slice config controller"
	I0809 18:58:14.373477       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0809 18:58:14.374708       1 config.go:315] "Starting node config controller"
	I0809 18:58:14.374806       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0809 18:58:14.475252       1 shared_informer.go:318] Caches are synced for service config
	I0809 18:58:14.475408       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0809 18:58:14.476779       1 shared_informer.go:318] Caches are synced for node config
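kube-proxy came up with the iptables proxier, so every Service VIP is realized as a chain in the nat table. The top of the dispatch chain can be inspected on the node (sketch, assuming the standard KUBE-SERVICES chain name):

    minikube -p multinode-814696 ssh "sudo iptables -t nat -L KUBE-SERVICES -n" | head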
	
	* 
	* ==> kube-scheduler [2ee55013df0a937094d1d3af49181e1c999c9dca12575791e4d714366ef00a59] <==
	* W0809 18:57:57.665402       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0809 18:57:57.665414       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0809 18:57:57.665487       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0809 18:57:57.665554       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0809 18:57:57.665520       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0809 18:57:57.665581       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0809 18:57:57.665791       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0809 18:57:57.665812       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0809 18:57:57.666562       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0809 18:57:57.666589       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0809 18:57:58.493748       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0809 18:57:58.493789       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0809 18:57:58.539286       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0809 18:57:58.539323       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0809 18:57:58.572832       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0809 18:57:58.572868       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0809 18:57:58.629282       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0809 18:57:58.629323       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0809 18:57:58.644666       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0809 18:57:58.644699       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0809 18:57:58.668598       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0809 18:57:58.668635       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0809 18:57:58.748094       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0809 18:57:58.748122       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0809 18:58:01.462371       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
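The scheduler's list/watch denials are a startup race: its RBAC bindings had not propagated yet, and the errors stop once the client-ca informer syncs at 18:58:01. That the permissions did land can be verified afterwards with an impersonated access check (sketch):

    kubectl --context multinode-814696 auth can-i list pods \
      --as=system:kube-scheduler --all-namespaces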
	
	* 
	* ==> kubelet <==
	* Aug 09 18:58:13 multinode-814696 kubelet[1589]: I0809 18:58:13.271338    1589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef0f59b8-8f6f-4043-8edd-b34c75101580-lib-modules\") pod \"kindnet-n72x8\" (UID: \"ef0f59b8-8f6f-4043-8edd-b34c75101580\") " pod="kube-system/kindnet-n72x8"
	Aug 09 18:58:13 multinode-814696 kubelet[1589]: I0809 18:58:13.271358    1589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjxmm\" (UniqueName: \"kubernetes.io/projected/ef0f59b8-8f6f-4043-8edd-b34c75101580-kube-api-access-gjxmm\") pod \"kindnet-n72x8\" (UID: \"ef0f59b8-8f6f-4043-8edd-b34c75101580\") " pod="kube-system/kindnet-n72x8"
	Aug 09 18:58:13 multinode-814696 kubelet[1589]: I0809 18:58:13.271478    1589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d86217ed-fcd4-4549-9c9c-36742860c3e6-kube-proxy\") pod \"kube-proxy-2tcmw\" (UID: \"d86217ed-fcd4-4549-9c9c-36742860c3e6\") " pod="kube-system/kube-proxy-2tcmw"
	Aug 09 18:58:13 multinode-814696 kubelet[1589]: I0809 18:58:13.271524    1589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/ef0f59b8-8f6f-4043-8edd-b34c75101580-cni-cfg\") pod \"kindnet-n72x8\" (UID: \"ef0f59b8-8f6f-4043-8edd-b34c75101580\") " pod="kube-system/kindnet-n72x8"
	Aug 09 18:58:13 multinode-814696 kubelet[1589]: I0809 18:58:13.271580    1589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d86217ed-fcd4-4549-9c9c-36742860c3e6-xtables-lock\") pod \"kube-proxy-2tcmw\" (UID: \"d86217ed-fcd4-4549-9c9c-36742860c3e6\") " pod="kube-system/kube-proxy-2tcmw"
	Aug 09 18:58:13 multinode-814696 kubelet[1589]: I0809 18:58:13.271606    1589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d86217ed-fcd4-4549-9c9c-36742860c3e6-lib-modules\") pod \"kube-proxy-2tcmw\" (UID: \"d86217ed-fcd4-4549-9c9c-36742860c3e6\") " pod="kube-system/kube-proxy-2tcmw"
	Aug 09 18:58:13 multinode-814696 kubelet[1589]: W0809 18:58:13.655383    1589 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/8ea453b976d14165f5dce4299673a2a7dce0fde1a3e93bb5272c76245cac0de7/crio-be281f0e4c4f5d7e9e6e5f9ec5a5cc472bb72aacf82c8ae9a0b8f21bfffb1af8 WatchSource:0}: Error finding container be281f0e4c4f5d7e9e6e5f9ec5a5cc472bb72aacf82c8ae9a0b8f21bfffb1af8: Status 404 returned error can't find the container with id be281f0e4c4f5d7e9e6e5f9ec5a5cc472bb72aacf82c8ae9a0b8f21bfffb1af8
	Aug 09 18:58:13 multinode-814696 kubelet[1589]: W0809 18:58:13.656387    1589 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/8ea453b976d14165f5dce4299673a2a7dce0fde1a3e93bb5272c76245cac0de7/crio-4d955d42ac47e281a0f8f3d4f37739f5e837cb30ac698fbe1d48d81da119253b WatchSource:0}: Error finding container 4d955d42ac47e281a0f8f3d4f37739f5e837cb30ac698fbe1d48d81da119253b: Status 404 returned error can't find the container with id 4d955d42ac47e281a0f8f3d4f37739f5e837cb30ac698fbe1d48d81da119253b
	Aug 09 18:58:14 multinode-814696 kubelet[1589]: I0809 18:58:14.774714    1589 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-n72x8" podStartSLOduration=1.774668471 podCreationTimestamp="2023-08-09 18:58:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-09 18:58:14.774597027 +0000 UTC m=+14.287849749" watchObservedRunningTime="2023-08-09 18:58:14.774668471 +0000 UTC m=+14.287921195"
	Aug 09 18:58:14 multinode-814696 kubelet[1589]: I0809 18:58:14.784168    1589 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-2tcmw" podStartSLOduration=1.784130363 podCreationTimestamp="2023-08-09 18:58:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-09 18:58:14.784113884 +0000 UTC m=+14.297366605" watchObservedRunningTime="2023-08-09 18:58:14.784130363 +0000 UTC m=+14.297383078"
	Aug 09 18:58:44 multinode-814696 kubelet[1589]: I0809 18:58:44.639626    1589 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Aug 09 18:58:44 multinode-814696 kubelet[1589]: I0809 18:58:44.661432    1589 topology_manager.go:212] "Topology Admit Handler"
	Aug 09 18:58:44 multinode-814696 kubelet[1589]: I0809 18:58:44.662776    1589 topology_manager.go:212] "Topology Admit Handler"
	Aug 09 18:58:44 multinode-814696 kubelet[1589]: I0809 18:58:44.706306    1589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8fe3ded6-9715-4d97-8107-25b1ae2c1949-tmp\") pod \"storage-provisioner\" (UID: \"8fe3ded6-9715-4d97-8107-25b1ae2c1949\") " pod="kube-system/storage-provisioner"
	Aug 09 18:58:44 multinode-814696 kubelet[1589]: I0809 18:58:44.706368    1589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcrz6\" (UniqueName: \"kubernetes.io/projected/6a7f440d-1020-4de5-9a75-42a2357a6e79-kube-api-access-lcrz6\") pod \"coredns-5d78c9869d-zj6cv\" (UID: \"6a7f440d-1020-4de5-9a75-42a2357a6e79\") " pod="kube-system/coredns-5d78c9869d-zj6cv"
	Aug 09 18:58:44 multinode-814696 kubelet[1589]: I0809 18:58:44.706390    1589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8rw5\" (UniqueName: \"kubernetes.io/projected/8fe3ded6-9715-4d97-8107-25b1ae2c1949-kube-api-access-j8rw5\") pod \"storage-provisioner\" (UID: \"8fe3ded6-9715-4d97-8107-25b1ae2c1949\") " pod="kube-system/storage-provisioner"
	Aug 09 18:58:44 multinode-814696 kubelet[1589]: I0809 18:58:44.706571    1589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a7f440d-1020-4de5-9a75-42a2357a6e79-config-volume\") pod \"coredns-5d78c9869d-zj6cv\" (UID: \"6a7f440d-1020-4de5-9a75-42a2357a6e79\") " pod="kube-system/coredns-5d78c9869d-zj6cv"
	Aug 09 18:58:45 multinode-814696 kubelet[1589]: W0809 18:58:45.004649    1589 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/8ea453b976d14165f5dce4299673a2a7dce0fde1a3e93bb5272c76245cac0de7/crio-d5945488410872a1b9dd6125a5783deae06a4d48f4b16ab5556705e6753ac66c WatchSource:0}: Error finding container d5945488410872a1b9dd6125a5783deae06a4d48f4b16ab5556705e6753ac66c: Status 404 returned error can't find the container with id d5945488410872a1b9dd6125a5783deae06a4d48f4b16ab5556705e6753ac66c
	Aug 09 18:58:45 multinode-814696 kubelet[1589]: W0809 18:58:45.004919    1589 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/8ea453b976d14165f5dce4299673a2a7dce0fde1a3e93bb5272c76245cac0de7/crio-93d79633d02163e5b235fdd60953be450a5d0901d3e7576fd3e8fd841a07252e WatchSource:0}: Error finding container 93d79633d02163e5b235fdd60953be450a5d0901d3e7576fd3e8fd841a07252e: Status 404 returned error can't find the container with id 93d79633d02163e5b235fdd60953be450a5d0901d3e7576fd3e8fd841a07252e
	Aug 09 18:58:45 multinode-814696 kubelet[1589]: I0809 18:58:45.832224    1589 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=31.832187283 podCreationTimestamp="2023-08-09 18:58:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-09 18:58:45.83179995 +0000 UTC m=+45.345052672" watchObservedRunningTime="2023-08-09 18:58:45.832187283 +0000 UTC m=+45.345440060"
	Aug 09 18:58:45 multinode-814696 kubelet[1589]: I0809 18:58:45.841334    1589 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-zj6cv" podStartSLOduration=32.841289708 podCreationTimestamp="2023-08-09 18:58:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-09 18:58:45.841269905 +0000 UTC m=+45.354522626" watchObservedRunningTime="2023-08-09 18:58:45.841289708 +0000 UTC m=+45.354542429"
	Aug 09 18:59:06 multinode-814696 kubelet[1589]: I0809 18:59:06.288397    1589 topology_manager.go:212] "Topology Admit Handler"
	Aug 09 18:59:06 multinode-814696 kubelet[1589]: I0809 18:59:06.329865    1589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfr6q\" (UniqueName: \"kubernetes.io/projected/15a75c58-d029-44d9-bc0e-f3d5976471ef-kube-api-access-zfr6q\") pod \"busybox-67b7f59bb-wvdrx\" (UID: \"15a75c58-d029-44d9-bc0e-f3d5976471ef\") " pod="default/busybox-67b7f59bb-wvdrx"
	Aug 09 18:59:06 multinode-814696 kubelet[1589]: W0809 18:59:06.632499    1589 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/8ea453b976d14165f5dce4299673a2a7dce0fde1a3e93bb5272c76245cac0de7/crio-3f58f765453952b35458bba9afb682228a750851846599c20d58bf5a04558a08 WatchSource:0}: Error finding container 3f58f765453952b35458bba9afb682228a750851846599c20d58bf5a04558a08: Status 404 returned error can't find the container with id 3f58f765453952b35458bba9afb682228a750851846599c20d58bf5a04558a08
	Aug 09 18:59:07 multinode-814696 kubelet[1589]: I0809 18:59:07.868687    1589 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-67b7f59bb-wvdrx" podStartSLOduration=1.01338712 podCreationTimestamp="2023-08-09 18:59:06 +0000 UTC" firstStartedPulling="2023-08-09 18:59:06.636235629 +0000 UTC m=+66.149488338" lastFinishedPulling="2023-08-09 18:59:07.491481083 +0000 UTC m=+67.004733793" observedRunningTime="2023-08-09 18:59:07.868277329 +0000 UTC m=+67.381530049" watchObservedRunningTime="2023-08-09 18:59:07.868632575 +0000 UTC m=+67.381885297"
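The capture stops at 18:59:07; kubelet entries beyond this window live in the node's systemd journal (sketch):

    minikube -p multinode-814696 ssh "sudo journalctl -u kubelet --no-pager -n 50"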
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-814696 -n multinode-814696
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-814696 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.19s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (73.24s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:132: (dbg) Run:  /tmp/minikube-v1.9.0.2818498088.exe start -p running-upgrade-142506 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:132: (dbg) Done: /tmp/minikube-v1.9.0.2818498088.exe start -p running-upgrade-142506 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m8.118636253s)
version_upgrade_test.go:142: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-142506 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:142: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-142506 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (2.37359492s)
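Exit status 90 sits in minikube's reserved range for container-runtime errors (per its documented exit codes), which points at crio during the restart rather than at the docker driver. The failure path reduces to the two starts the test performs, first with the legacy v1.9.0 binary and then with the binary under test:

    /tmp/minikube-v1.9.0.2818498088.exe start -p running-upgrade-142506 --memory=2200 --vm-driver=docker --container-runtime=crio
    out/minikube-linux-amd64 start -p running-upgrade-142506 --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=crio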

                                                
                                                
-- stdout --
	* [running-upgrade-142506] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17011
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17011-816603/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17011-816603/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.4
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-142506 in cluster running-upgrade-142506
	* Pulling base image ...
	* Updating the running docker "running-upgrade-142506" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0809 19:11:06.457687  997679 out.go:296] Setting OutFile to fd 1 ...
	I0809 19:11:06.458104  997679 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 19:11:06.458142  997679 out.go:309] Setting ErrFile to fd 2...
	I0809 19:11:06.458159  997679 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 19:11:06.458503  997679 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17011-816603/.minikube/bin
	I0809 19:11:06.459154  997679 out.go:303] Setting JSON to false
	I0809 19:11:06.460688  997679 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":10422,"bootTime":1691597845,"procs":604,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1038-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0809 19:11:06.460788  997679 start.go:138] virtualization: kvm guest
	I0809 19:11:06.463912  997679 out.go:177] * [running-upgrade-142506] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)
	I0809 19:11:06.465564  997679 out.go:177]   - MINIKUBE_LOCATION=17011
	I0809 19:11:06.465614  997679 notify.go:220] Checking for updates...
	I0809 19:11:06.467405  997679 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 19:11:06.469494  997679 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17011-816603/kubeconfig
	I0809 19:11:06.471113  997679 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17011-816603/.minikube
	I0809 19:11:06.472732  997679 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0809 19:11:06.474422  997679 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0809 19:11:06.476165  997679 config.go:182] Loaded profile config "running-upgrade-142506": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0809 19:11:06.476196  997679 start_flags.go:695] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37
	I0809 19:11:06.478059  997679 out.go:177] * Kubernetes 1.27.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.4
	I0809 19:11:06.479469  997679 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 19:11:06.509921  997679 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0809 19:11:06.510032  997679 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0809 19:11:06.568026  997679 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:89 OomKillDisable:true NGoroutines:84 SystemTime:2023-08-09 19:11:06.558456082 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1038-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0809 19:11:06.568162  997679 docker.go:294] overlay module found
	I0809 19:11:06.569961  997679 out.go:177] * Using the docker driver based on existing profile
	I0809 19:11:06.571248  997679 start.go:298] selected driver: docker
	I0809 19:11:06.571263  997679 start.go:901] validating driver "docker" against &{Name:running-upgrade-142506 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-142506 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 19:11:06.571383  997679 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0809 19:11:06.572623  997679 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0809 19:11:06.638593  997679 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:89 OomKillDisable:true NGoroutines:84 SystemTime:2023-08-09 19:11:06.627817135 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1038-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0809 19:11:06.638894  997679 cni.go:84] Creating CNI manager for ""
	I0809 19:11:06.638922  997679 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I0809 19:11:06.638931  997679 start_flags.go:319] config:
	{Name:running-upgrade-142506 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-142506 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 19:11:06.641473  997679 out.go:177] * Starting control plane node running-upgrade-142506 in cluster running-upgrade-142506
	I0809 19:11:06.644853  997679 cache.go:122] Beginning downloading kic base image for docker with crio
	I0809 19:11:06.647132  997679 out.go:177] * Pulling base image ...
	I0809 19:11:06.649353  997679 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I0809 19:11:06.649393  997679 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 in local docker daemon
	I0809 19:11:06.670878  997679 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 in local docker daemon, skipping pull
	I0809 19:11:06.670904  997679 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 exists in daemon, skipping load
	W0809 19:11:06.673565  997679 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0809 19:11:06.673750  997679 profile.go:148] Saving config to /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/running-upgrade-142506/config.json ...
	I0809 19:11:06.673835  997679 cache.go:107] acquiring lock: {Name:mkd9197103bec7790558728dc8d8d7d6bb431333 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 19:11:06.673861  997679 cache.go:107] acquiring lock: {Name:mkdb3a93da45e4059fdb8bba5c77cdaf9850cc33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 19:11:06.673871  997679 cache.go:107] acquiring lock: {Name:mk83da2da8733cc768e72247513a10892100361f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 19:11:06.673899  997679 cache.go:107] acquiring lock: {Name:mk1f1410c68a9608ea997d476b63cd5e5e556883 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 19:11:06.673938  997679 cache.go:115] /home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I0809 19:11:06.673958  997679 cache.go:115] /home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I0809 19:11:06.673964  997679 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 132.733µs
	I0809 19:11:06.673992  997679 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I0809 19:11:06.673999  997679 cache.go:115] /home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I0809 19:11:06.673974  997679 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 127.714µs
	I0809 19:11:06.673989  997679 cache.go:107] acquiring lock: {Name:mkd9a64ef04fb71f8da80b92b3b81702b6da89f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 19:11:06.674011  997679 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I0809 19:11:06.674010  997679 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 144.408µs
	I0809 19:11:06.674028  997679 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I0809 19:11:06.674001  997679 cache.go:107] acquiring lock: {Name:mk0cda497045cdcd766b48bb35a072fe17459cb2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 19:11:06.674004  997679 cache.go:107] acquiring lock: {Name:mk14565f17418281ac827d2b965d38fd1f199283 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 19:11:06.674080  997679 cache.go:115] /home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I0809 19:11:06.674211  997679 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 284.07µs
	I0809 19:11:06.674228  997679 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I0809 19:11:06.674068  997679 cache.go:107] acquiring lock: {Name:mk23c78e3d10b406d28407d10c8afb59371fe70a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 19:11:06.674242  997679 cache.go:195] Successfully downloaded all kic artifacts
	I0809 19:11:06.674259  997679 cache.go:115] /home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0809 19:11:06.674270  997679 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 252.035µs
	I0809 19:11:06.674282  997679 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0809 19:11:06.674140  997679 cache.go:115] /home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I0809 19:11:06.674291  997679 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 297.711µs
	I0809 19:11:06.674290  997679 start.go:365] acquiring machines lock for running-upgrade-142506: {Name:mk64f31ed8e4268ed6a7d6f3a404d54b3d90991f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 19:11:06.674299  997679 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I0809 19:11:06.674194  997679 cache.go:115] /home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0809 19:11:06.674363  997679 start.go:369] acquired machines lock for "running-upgrade-142506" in 56.04µs
	I0809 19:11:06.674164  997679 cache.go:115] /home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I0809 19:11:06.674380  997679 start.go:96] Skipping create...Using existing machine configuration
	I0809 19:11:06.674406  997679 fix.go:54] fixHost starting: m01
	I0809 19:11:06.674376  997679 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 507.195µs
	I0809 19:11:06.674502  997679 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0809 19:11:06.674392  997679 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 459.987µs
	I0809 19:11:06.674511  997679 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I0809 19:11:06.674542  997679 cache.go:87] Successfully saved all images to host disk.
	I0809 19:11:06.674670  997679 cli_runner.go:164] Run: docker container inspect running-upgrade-142506 --format={{.State.Status}}
	I0809 19:11:06.696230  997679 fix.go:102] recreateIfNeeded on running-upgrade-142506: state=Running err=<nil>
	W0809 19:11:06.696261  997679 fix.go:128] unexpected machine state, will restart: <nil>
	I0809 19:11:06.698266  997679 out.go:177] * Updating the running docker "running-upgrade-142506" container ...
	I0809 19:11:06.699757  997679 machine.go:88] provisioning docker machine ...
	I0809 19:11:06.699786  997679 ubuntu.go:169] provisioning hostname "running-upgrade-142506"
	I0809 19:11:06.699845  997679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-142506
	I0809 19:11:06.722367  997679 main.go:141] libmachine: Using SSH client type: native
	I0809 19:11:06.722995  997679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 127.0.0.1 33593 <nil> <nil>}
	I0809 19:11:06.723021  997679 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-142506 && echo "running-upgrade-142506" | sudo tee /etc/hostname
	I0809 19:11:06.882268  997679 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-142506
	
	I0809 19:11:06.882366  997679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-142506
	I0809 19:11:06.907905  997679 main.go:141] libmachine: Using SSH client type: native
	I0809 19:11:06.908546  997679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 127.0.0.1 33593 <nil> <nil>}
	I0809 19:11:06.908576  997679 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-142506' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-142506/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-142506' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0809 19:11:07.024043  997679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0809 19:11:07.024075  997679 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17011-816603/.minikube CaCertPath:/home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17011-816603/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17011-816603/.minikube}
	I0809 19:11:07.024105  997679 ubuntu.go:177] setting up certificates
	I0809 19:11:07.024114  997679 provision.go:83] configureAuth start
	I0809 19:11:07.024190  997679 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-142506
	I0809 19:11:07.044375  997679 provision.go:138] copyHostCerts
	I0809 19:11:07.044464  997679 exec_runner.go:144] found /home/jenkins/minikube-integration/17011-816603/.minikube/ca.pem, removing ...
	I0809 19:11:07.044487  997679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17011-816603/.minikube/ca.pem
	I0809 19:11:07.044565  997679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17011-816603/.minikube/ca.pem (1082 bytes)
	I0809 19:11:07.044692  997679 exec_runner.go:144] found /home/jenkins/minikube-integration/17011-816603/.minikube/cert.pem, removing ...
	I0809 19:11:07.044710  997679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17011-816603/.minikube/cert.pem
	I0809 19:11:07.044747  997679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17011-816603/.minikube/cert.pem (1123 bytes)
	I0809 19:11:07.044826  997679 exec_runner.go:144] found /home/jenkins/minikube-integration/17011-816603/.minikube/key.pem, removing ...
	I0809 19:11:07.044838  997679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17011-816603/.minikube/key.pem
	I0809 19:11:07.044868  997679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17011-816603/.minikube/key.pem (1679 bytes)
	I0809 19:11:07.044938  997679 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17011-816603/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-142506 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-142506]
	I0809 19:11:07.201377  997679 provision.go:172] copyRemoteCerts
	I0809 19:11:07.201463  997679 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0809 19:11:07.201517  997679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-142506
	I0809 19:11:07.221127  997679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33593 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/running-upgrade-142506/id_rsa Username:docker}
	I0809 19:11:07.316960  997679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0809 19:11:07.350931  997679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0809 19:11:07.372949  997679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0809 19:11:07.396841  997679 provision.go:86] duration metric: configureAuth took 372.712311ms
	I0809 19:11:07.396870  997679 ubuntu.go:193] setting minikube options for container-runtime
	I0809 19:11:07.397033  997679 config.go:182] Loaded profile config "running-upgrade-142506": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0809 19:11:07.397126  997679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-142506
	I0809 19:11:07.424397  997679 main.go:141] libmachine: Using SSH client type: native
	I0809 19:11:07.424794  997679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 127.0.0.1 33593 <nil> <nil>}
	I0809 19:11:07.424810  997679 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0809 19:11:07.947581  997679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0809 19:11:07.947611  997679 machine.go:91] provisioned docker machine in 1.24783542s
	I0809 19:11:07.947624  997679 start.go:300] post-start starting for "running-upgrade-142506" (driver="docker")
	I0809 19:11:07.947649  997679 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0809 19:11:07.947719  997679 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0809 19:11:07.947771  997679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-142506
	I0809 19:11:07.964831  997679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33593 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/running-upgrade-142506/id_rsa Username:docker}
	I0809 19:11:08.051189  997679 ssh_runner.go:195] Run: cat /etc/os-release
	I0809 19:11:08.054353  997679 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0809 19:11:08.054391  997679 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0809 19:11:08.054407  997679 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0809 19:11:08.054416  997679 info.go:137] Remote host: Ubuntu 19.10
	I0809 19:11:08.054431  997679 filesync.go:126] Scanning /home/jenkins/minikube-integration/17011-816603/.minikube/addons for local assets ...
	I0809 19:11:08.054496  997679 filesync.go:126] Scanning /home/jenkins/minikube-integration/17011-816603/.minikube/files for local assets ...
	I0809 19:11:08.054639  997679 filesync.go:149] local asset: /home/jenkins/minikube-integration/17011-816603/.minikube/files/etc/ssl/certs/8234342.pem -> 8234342.pem in /etc/ssl/certs
	I0809 19:11:08.054777  997679 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0809 19:11:08.062869  997679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/files/etc/ssl/certs/8234342.pem --> /etc/ssl/certs/8234342.pem (1708 bytes)
	I0809 19:11:08.082800  997679 start.go:303] post-start completed in 135.150318ms
	I0809 19:11:08.082879  997679 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0809 19:11:08.082927  997679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-142506
	I0809 19:11:08.099958  997679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33593 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/running-upgrade-142506/id_rsa Username:docker}
	I0809 19:11:08.176134  997679 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0809 19:11:08.180091  997679 fix.go:56] fixHost completed within 1.505678371s
	I0809 19:11:08.180114  997679 start.go:83] releasing machines lock for "running-upgrade-142506", held for 1.505741184s
	I0809 19:11:08.180190  997679 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-142506
	I0809 19:11:08.198355  997679 ssh_runner.go:195] Run: cat /version.json
	I0809 19:11:08.198412  997679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-142506
	I0809 19:11:08.198471  997679 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0809 19:11:08.198539  997679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-142506
	I0809 19:11:08.215921  997679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33593 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/running-upgrade-142506/id_rsa Username:docker}
	I0809 19:11:08.217048  997679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33593 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/running-upgrade-142506/id_rsa Username:docker}
	W0809 19:11:08.290759  997679 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0809 19:11:08.290847  997679 ssh_runner.go:195] Run: systemctl --version
	I0809 19:11:08.330776  997679 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0809 19:11:08.382802  997679 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0809 19:11:08.386998  997679 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0809 19:11:08.401828  997679 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0809 19:11:08.401925  997679 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0809 19:11:08.425308  997679 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0809 19:11:08.425332  997679 start.go:466] detecting cgroup driver to use...
	I0809 19:11:08.425364  997679 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0809 19:11:08.425415  997679 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0809 19:11:08.445939  997679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0809 19:11:08.455236  997679 docker.go:196] disabling cri-docker service (if available) ...
	I0809 19:11:08.455288  997679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0809 19:11:08.464098  997679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0809 19:11:08.473045  997679 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0809 19:11:08.481978  997679 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0809 19:11:08.482051  997679 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0809 19:11:08.561278  997679 docker.go:212] disabling docker service ...
	I0809 19:11:08.561337  997679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0809 19:11:08.570864  997679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0809 19:11:08.579902  997679 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0809 19:11:08.660065  997679 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0809 19:11:08.741342  997679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0809 19:11:08.751331  997679 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0809 19:11:08.765605  997679 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0809 19:11:08.765685  997679 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0809 19:11:08.777175  997679 out.go:177] 
	W0809 19:11:08.779055  997679 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0809 19:11:08.779088  997679 out.go:239] * 
	* 
	W0809 19:11:08.780353  997679 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0809 19:11:08.782026  997679 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:144: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-142506 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-08-09 19:11:08.801462684 +0000 UTC m=+1923.968567815
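The RUNTIME_ENABLE error above is the actual point of failure: the new binary updates pause_image with sed against /etc/crio/crio.conf.d/02-crio.conf, but the machine it reuses was provisioned by minikube v1.9.0 (the remote host reports Ubuntu 19.10, kicbase:v0.0.8), and that image evidently has no crio.conf.d drop-in. A minimal sketch of a fallback, assuming the legacy image keeps its CRI-O settings in the monolithic /etc/crio/crio.conf instead:

	# Sketch only: prefer the drop-in, else fall back to the monolithic
	# config (assumed location on the old kicbase image).
	conf=/etc/crio/crio.conf.d/02-crio.conf
	[ -f "$conf" ] || conf=/etc/crio/crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$conf"
	sudo systemctl restart crio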
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-142506
helpers_test.go:235: (dbg) docker inspect running-upgrade-142506:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "eb551aca526d6d3c8dcfc6c759c079ad457c9bcee5654d4a399f3fc9cab34ac1",
	        "Created": "2023-08-09T19:09:58.649167889Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 982229,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-08-09T19:09:59.922729195Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/eb551aca526d6d3c8dcfc6c759c079ad457c9bcee5654d4a399f3fc9cab34ac1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/eb551aca526d6d3c8dcfc6c759c079ad457c9bcee5654d4a399f3fc9cab34ac1/hostname",
	        "HostsPath": "/var/lib/docker/containers/eb551aca526d6d3c8dcfc6c759c079ad457c9bcee5654d4a399f3fc9cab34ac1/hosts",
	        "LogPath": "/var/lib/docker/containers/eb551aca526d6d3c8dcfc6c759c079ad457c9bcee5654d4a399f3fc9cab34ac1/eb551aca526d6d3c8dcfc6c759c079ad457c9bcee5654d4a399f3fc9cab34ac1-json.log",
	        "Name": "/running-upgrade-142506",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-142506:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/89875cd6018abd984e5b8d344ffff07d15ab5ee4aeba36735bba5be90dd5d5fa-init/diff:/var/lib/docker/overlay2/123d0aed2358c1d090f6436ceb1b70b7549af38c6d8c196ec9be661056215f48/diff:/var/lib/docker/overlay2/125b6339b990269036f27e9dba5f80284fea40cb60ea128a9aaece197685640c/diff:/var/lib/docker/overlay2/4d0995eff8a4ad60bbc53ceeaf6325ce85c2a7dad22a98823fb967273c2ffabf/diff:/var/lib/docker/overlay2/5c1d8550201432a0af22ac05a2b73adfa2404ae0a9337873de7483674252d2d3/diff:/var/lib/docker/overlay2/74c9cdeec2c015bece29ac6c65c0704f7964d52d4deddd15f82a77fdf64e8807/diff:/var/lib/docker/overlay2/5884ad364e1862588219fec3eb02a129427c324a0e5104d323dfe6591f132f9b/diff:/var/lib/docker/overlay2/fcea20a00c9a68e365dc09d54a23c2b035ba442af6b8c78c4651627fdbadaa00/diff:/var/lib/docker/overlay2/bc4f0e64fbb5d24b8ede840825c9287dfe78b1945cfb634df61cc8a5b43822b2/diff:/var/lib/docker/overlay2/50acaa07c44bdf9594944481a392f6e0c446c4a079defcc2788c27fa31a056c6/diff:/var/lib/docker/overlay2/ebc89d
fb3c0a7c37b58c2984ff24187f9f000a74ddeb716fec5f9d758c6984de/diff:/var/lib/docker/overlay2/29bf4f8edfc2101fff2b8b88824580f3b1e725576c60edc151e118d82382fbfd/diff:/var/lib/docker/overlay2/79435977931a2e79d79312ec791a389989b703f1962b97c498e6b96ccbe6c14d/diff:/var/lib/docker/overlay2/c2a860b020a5090667df5d6e26f2366eb6558ba5d599d52b60adbf499092e692/diff:/var/lib/docker/overlay2/680d44ccdd4da84b1605caa043e8bf2b2c98fc33279a9a2255109ef2d8903937/diff:/var/lib/docker/overlay2/dabbfe31efed3d356a1ff64e9a73b74a8ef413a28a6c75291fc7746be3017825/diff:/var/lib/docker/overlay2/6ae4411f9a5d08519bf64a8bcb4b583785922a42a2fe39d44239a1aff0ca64e9/diff:/var/lib/docker/overlay2/660a7632a827b63ef9e10c4eb54fc87bfc2e512f1016fcdac6258d7d464c40d0/diff:/var/lib/docker/overlay2/7aa1fecdca149bc726f7e884174693e4a0506ececa1d7e6805bf65c2f96adb88/diff:/var/lib/docker/overlay2/13c88136c09348f7f9b0aa1ea6e59d8ebe8464651d4a5bc31d7d6964a3c92134/diff:/var/lib/docker/overlay2/3bed1b5ae25d372fe5414df7d08303a1bba2cf8c1e85b5c5c9f28b777c11461c/diff:/var/lib/d
ocker/overlay2/cdaa56d01dd23405ea454af85a1b613cbcd0310850e01930a30102256f6854b8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/89875cd6018abd984e5b8d344ffff07d15ab5ee4aeba36735bba5be90dd5d5fa/merged",
	                "UpperDir": "/var/lib/docker/overlay2/89875cd6018abd984e5b8d344ffff07d15ab5ee4aeba36735bba5be90dd5d5fa/diff",
	                "WorkDir": "/var/lib/docker/overlay2/89875cd6018abd984e5b8d344ffff07d15ab5ee4aeba36735bba5be90dd5d5fa/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-142506",
	                "Source": "/var/lib/docker/volumes/running-upgrade-142506/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-142506",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-142506",
	                "name.minikube.sigs.k8s.io": "running-upgrade-142506",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fb54f737e60945646e83457670dfd51093661e0ad3ee68ef6afb2f0a209f7a8c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33593"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33592"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33591"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/fb54f737e609",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "bee162e4ef2fadd7f1aa1d2ad6b55eb13a39f2a1848ba1112a91fb0f578da8d2",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "fe60007d0116d5b5f84b3355637c856f39a603a95d7a38740292b9e625202332",
	                    "EndpointID": "bee162e4ef2fadd7f1aa1d2ad6b55eb13a39f2a1848ba1112a91fb0f578da8d2",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
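The inspect output shows why the new binary reused the machine: the container is still Running and was created from the legacy gcr.io/k8s-minikube/kicbase:v0.0.8 image, which matches the missing crio.conf.d path above. When triaging similar runs, the relevant fields can be pulled directly with docker's standard --format templates, for example:

	docker inspect -f '{{.Config.Image}}' running-upgrade-142506
	docker inspect -f '{{.State.Status}}' running-upgrade-142506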
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-142506 -n running-upgrade-142506
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-142506 -n running-upgrade-142506: exit status 4 (315.25361ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0809 19:11:09.107491  999046 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-142506" does not appear in /home/jenkins/minikube-integration/17011-816603/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-142506" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
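The exit status 4 here is a secondary symptom: kubeconfig still carries the context written by the v1.9.0 binary, so the endpoint for "running-upgrade-142506" cannot be extracted from /home/jenkins/minikube-integration/17011-816603/kubeconfig. Had the cluster come up, the stale context could be repaired as the warning itself suggests, e.g.:

	minikube update-context -p running-upgrade-142506
	kubectl config current-context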
helpers_test.go:175: Cleaning up "running-upgrade-142506" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-142506
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-142506: (1.977640403s)
--- FAIL: TestRunningBinaryUpgrade (73.24s)
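One further detail from the stderr above: the v1.18.0 cri-o preload tarball is not published upstream (the 404 at preload.go:115), so the run falls back to the individually cached images; that is slower but expected for a Kubernetes version this old. The tarball's absence is easy to confirm with curl:

	curl -s -o /dev/null -w '%{http_code}\n' https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4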

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (64.11s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /tmp/minikube-v1.9.0.3889654341.exe start -p stopped-upgrade-321125 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:195: (dbg) Done: /tmp/minikube-v1.9.0.3889654341.exe start -p stopped-upgrade-321125 --memory=2200 --vm-driver=docker  --container-runtime=crio: (57.613993508s)
version_upgrade_test.go:204: (dbg) Run:  /tmp/minikube-v1.9.0.3889654341.exe -p stopped-upgrade-321125 stop
version_upgrade_test.go:210: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-321125 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:210: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-321125 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (5.4857543s)
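An exit status 90 after only ~5.5s is likely the same RUNTIME_ENABLE provisioning failure hit by TestRunningBinaryUpgrade above, since the restarted container is the same v1.9.0-era kicbase. The upgrade path can be replayed outside the test harness with the same three commands the test drives (binary paths as in this run):

	/tmp/minikube-v1.9.0.3889654341.exe start -p stopped-upgrade-321125 --memory=2200 --vm-driver=docker --container-runtime=crio
	/tmp/minikube-v1.9.0.3889654341.exe -p stopped-upgrade-321125 stop
	out/minikube-linux-amd64 start -p stopped-upgrade-321125 --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=crio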

                                                
                                                
-- stdout --
	* [stopped-upgrade-321125] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17011
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17011-816603/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17011-816603/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.4
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-321125 in cluster stopped-upgrade-321125
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-321125" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0809 19:11:56.505438 1006914 out.go:296] Setting OutFile to fd 1 ...
	I0809 19:11:56.505569 1006914 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 19:11:56.505580 1006914 out.go:309] Setting ErrFile to fd 2...
	I0809 19:11:56.505584 1006914 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 19:11:56.505796 1006914 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17011-816603/.minikube/bin
	I0809 19:11:56.506443 1006914 out.go:303] Setting JSON to false
	I0809 19:11:56.507938 1006914 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":10472,"bootTime":1691597845,"procs":686,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1038-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0809 19:11:56.508001 1006914 start.go:138] virtualization: kvm guest
	I0809 19:11:56.510513 1006914 out.go:177] * [stopped-upgrade-321125] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)
	I0809 19:11:56.512323 1006914 out.go:177]   - MINIKUBE_LOCATION=17011
	I0809 19:11:56.512406 1006914 notify.go:220] Checking for updates...
	I0809 19:11:56.515416 1006914 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 19:11:56.516976 1006914 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17011-816603/kubeconfig
	I0809 19:11:56.518532 1006914 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17011-816603/.minikube
	I0809 19:11:56.520097 1006914 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0809 19:11:56.521603 1006914 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0809 19:11:56.524398 1006914 config.go:182] Loaded profile config "stopped-upgrade-321125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0809 19:11:56.524476 1006914 start_flags.go:695] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37
	I0809 19:11:56.526751 1006914 out.go:177] * Kubernetes 1.27.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.4
	I0809 19:11:56.528362 1006914 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 19:11:56.550157 1006914 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0809 19:11:56.550283 1006914 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0809 19:11:56.605207 1006914 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:66 SystemTime:2023-08-09 19:11:56.596365714 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1038-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0809 19:11:56.605340 1006914 docker.go:294] overlay module found
	I0809 19:11:56.607299 1006914 out.go:177] * Using the docker driver based on existing profile
	I0809 19:11:56.608844 1006914 start.go:298] selected driver: docker
	I0809 19:11:56.608862 1006914 start.go:901] validating driver "docker" against &{Name:stopped-upgrade-321125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-321125 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 19:11:56.608953 1006914 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0809 19:11:56.609916 1006914 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0809 19:11:56.666524 1006914 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:66 SystemTime:2023-08-09 19:11:56.657131891 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1038-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0809 19:11:56.666829 1006914 cni.go:84] Creating CNI manager for ""
	I0809 19:11:56.666851 1006914 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I0809 19:11:56.666860 1006914 start_flags.go:319] config:
	{Name:stopped-upgrade-321125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-321125 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 19:11:56.669773 1006914 out.go:177] * Starting control plane node stopped-upgrade-321125 in cluster stopped-upgrade-321125
	I0809 19:11:56.671098 1006914 cache.go:122] Beginning downloading kic base image for docker with crio
	I0809 19:11:56.672590 1006914 out.go:177] * Pulling base image ...
	I0809 19:11:56.673961 1006914 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I0809 19:11:56.674073 1006914 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 in local docker daemon
	I0809 19:11:56.690422 1006914 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 in local docker daemon, skipping pull
	I0809 19:11:56.690450 1006914 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 exists in daemon, skipping load
	W0809 19:11:56.706949 1006914 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0809 19:11:56.707124 1006914 profile.go:148] Saving config to /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/stopped-upgrade-321125/config.json ...
	I0809 19:11:56.707263 1006914 cache.go:107] acquiring lock: {Name:mk14565f17418281ac827d2b965d38fd1f199283 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 19:11:56.707243 1006914 cache.go:107] acquiring lock: {Name:mk83da2da8733cc768e72247513a10892100361f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 19:11:56.707342 1006914 cache.go:107] acquiring lock: {Name:mkd9a64ef04fb71f8da80b92b3b81702b6da89f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 19:11:56.707261 1006914 cache.go:107] acquiring lock: {Name:mkd9197103bec7790558728dc8d8d7d6bb431333 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 19:11:56.707413 1006914 cache.go:115] /home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0809 19:11:56.707409 1006914 cache.go:195] Successfully downloaded all kic artifacts
	I0809 19:11:56.707437 1006914 cache.go:115] /home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I0809 19:11:56.707435 1006914 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 195.79µs
	I0809 19:11:56.707448 1006914 cache.go:115] /home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I0809 19:11:56.707449 1006914 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0809 19:11:56.707414 1006914 cache.go:107] acquiring lock: {Name:mk1f1410c68a9608ea997d476b63cd5e5e556883 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 19:11:56.707451 1006914 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 200.574µs
	I0809 19:11:56.707464 1006914 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I0809 19:11:56.707467 1006914 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 117.961µs
	I0809 19:11:56.707464 1006914 cache.go:115] /home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I0809 19:11:56.707476 1006914 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I0809 19:11:56.707483 1006914 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 256.307µs
	I0809 19:11:56.707499 1006914 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I0809 19:11:56.707463 1006914 cache.go:107] acquiring lock: {Name:mk23c78e3d10b406d28407d10c8afb59371fe70a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 19:11:56.707474 1006914 cache.go:107] acquiring lock: {Name:mk0cda497045cdcd766b48bb35a072fe17459cb2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 19:11:56.707499 1006914 start.go:365] acquiring machines lock for stopped-upgrade-321125: {Name:mk73b1d2b59869c108bf9b6d3835362695321439 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 19:11:56.707585 1006914 cache.go:115] /home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0809 19:11:56.707599 1006914 cache.go:115] /home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I0809 19:11:56.707614 1006914 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 244.573µs
	I0809 19:11:56.707666 1006914 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I0809 19:11:56.707586 1006914 cache.go:115] /home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I0809 19:11:56.707669 1006914 start.go:369] acquired machines lock for "stopped-upgrade-321125" in 88.886µs
	I0809 19:11:56.707678 1006914 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 241.181µs
	I0809 19:11:56.707688 1006914 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I0809 19:11:56.707598 1006914 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 175.583µs
	I0809 19:11:56.707690 1006914 start.go:96] Skipping create...Using existing machine configuration
	I0809 19:11:56.707696 1006914 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0809 19:11:56.707700 1006914 fix.go:54] fixHost starting: m01
	I0809 19:11:56.707384 1006914 cache.go:107] acquiring lock: {Name:mkdb3a93da45e4059fdb8bba5c77cdaf9850cc33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 19:11:56.707821 1006914 cache.go:115] /home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I0809 19:11:56.707832 1006914 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 501.8µs
	I0809 19:11:56.707852 1006914 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/17011-816603/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I0809 19:11:56.707866 1006914 cache.go:87] Successfully saved all images to host disk.
	I0809 19:11:56.707981 1006914 cli_runner.go:164] Run: docker container inspect stopped-upgrade-321125 --format={{.State.Status}}
	I0809 19:11:56.724716 1006914 fix.go:102] recreateIfNeeded on stopped-upgrade-321125: state=Stopped err=<nil>
	W0809 19:11:56.724742 1006914 fix.go:128] unexpected machine state, will restart: <nil>
	I0809 19:11:56.727011 1006914 out.go:177] * Restarting existing docker container for "stopped-upgrade-321125" ...
	I0809 19:11:56.728681 1006914 cli_runner.go:164] Run: docker start stopped-upgrade-321125
	I0809 19:11:56.997462 1006914 cli_runner.go:164] Run: docker container inspect stopped-upgrade-321125 --format={{.State.Status}}
	I0809 19:11:57.015375 1006914 kic.go:426] container "stopped-upgrade-321125" state is running.
	I0809 19:11:57.015846 1006914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-321125
	I0809 19:11:57.032208 1006914 profile.go:148] Saving config to /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/stopped-upgrade-321125/config.json ...
	I0809 19:11:57.032447 1006914 machine.go:88] provisioning docker machine ...
	I0809 19:11:57.032470 1006914 ubuntu.go:169] provisioning hostname "stopped-upgrade-321125"
	I0809 19:11:57.032526 1006914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-321125
	I0809 19:11:57.049219 1006914 main.go:141] libmachine: Using SSH client type: native
	I0809 19:11:57.049689 1006914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 127.0.0.1 33619 <nil> <nil>}
	I0809 19:11:57.049707 1006914 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-321125 && echo "stopped-upgrade-321125" | sudo tee /etc/hostname
	I0809 19:11:57.050274 1006914 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43964->127.0.0.1:33619: read: connection reset by peer
	I0809 19:12:00.168084 1006914 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-321125
	
	I0809 19:12:00.168189 1006914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-321125
	I0809 19:12:00.184986 1006914 main.go:141] libmachine: Using SSH client type: native
	I0809 19:12:00.185387 1006914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 127.0.0.1 33619 <nil> <nil>}
	I0809 19:12:00.185404 1006914 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-321125' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-321125/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-321125' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0809 19:12:00.300241 1006914 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0809 19:12:00.300277 1006914 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17011-816603/.minikube CaCertPath:/home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17011-816603/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17011-816603/.minikube}
	I0809 19:12:00.300305 1006914 ubuntu.go:177] setting up certificates
	I0809 19:12:00.300317 1006914 provision.go:83] configureAuth start
	I0809 19:12:00.300378 1006914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-321125
	I0809 19:12:00.317100 1006914 provision.go:138] copyHostCerts
	I0809 19:12:00.317163 1006914 exec_runner.go:144] found /home/jenkins/minikube-integration/17011-816603/.minikube/ca.pem, removing ...
	I0809 19:12:00.317186 1006914 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17011-816603/.minikube/ca.pem
	I0809 19:12:00.317252 1006914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17011-816603/.minikube/ca.pem (1082 bytes)
	I0809 19:12:00.317350 1006914 exec_runner.go:144] found /home/jenkins/minikube-integration/17011-816603/.minikube/cert.pem, removing ...
	I0809 19:12:00.317358 1006914 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17011-816603/.minikube/cert.pem
	I0809 19:12:00.317380 1006914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17011-816603/.minikube/cert.pem (1123 bytes)
	I0809 19:12:00.317439 1006914 exec_runner.go:144] found /home/jenkins/minikube-integration/17011-816603/.minikube/key.pem, removing ...
	I0809 19:12:00.317446 1006914 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17011-816603/.minikube/key.pem
	I0809 19:12:00.317467 1006914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17011-816603/.minikube/key.pem (1679 bytes)
	I0809 19:12:00.317520 1006914 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17011-816603/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-321125 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-321125]
	I0809 19:12:00.452355 1006914 provision.go:172] copyRemoteCerts
	I0809 19:12:00.452417 1006914 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0809 19:12:00.452457 1006914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-321125
	I0809 19:12:00.469014 1006914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33619 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/stopped-upgrade-321125/id_rsa Username:docker}
	I0809 19:12:00.555079 1006914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0809 19:12:00.571880 1006914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0809 19:12:00.588902 1006914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0809 19:12:00.605897 1006914 provision.go:86] duration metric: configureAuth took 305.563674ms
	I0809 19:12:00.605924 1006914 ubuntu.go:193] setting minikube options for container-runtime
	I0809 19:12:00.606133 1006914 config.go:182] Loaded profile config "stopped-upgrade-321125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0809 19:12:00.606275 1006914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-321125
	I0809 19:12:00.623108 1006914 main.go:141] libmachine: Using SSH client type: native
	I0809 19:12:00.623518 1006914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 127.0.0.1 33619 <nil> <nil>}
	I0809 19:12:00.623536 1006914 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0809 19:12:01.159713 1006914 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0809 19:12:01.159757 1006914 machine.go:91] provisioned docker machine in 4.127286698s
	I0809 19:12:01.159771 1006914 start.go:300] post-start starting for "stopped-upgrade-321125" (driver="docker")
	I0809 19:12:01.159785 1006914 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0809 19:12:01.159853 1006914 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0809 19:12:01.159902 1006914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-321125
	I0809 19:12:01.177427 1006914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33619 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/stopped-upgrade-321125/id_rsa Username:docker}
	I0809 19:12:01.259335 1006914 ssh_runner.go:195] Run: cat /etc/os-release
	I0809 19:12:01.262010 1006914 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0809 19:12:01.262035 1006914 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0809 19:12:01.262050 1006914 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0809 19:12:01.262056 1006914 info.go:137] Remote host: Ubuntu 19.10
	I0809 19:12:01.262068 1006914 filesync.go:126] Scanning /home/jenkins/minikube-integration/17011-816603/.minikube/addons for local assets ...
	I0809 19:12:01.262130 1006914 filesync.go:126] Scanning /home/jenkins/minikube-integration/17011-816603/.minikube/files for local assets ...
	I0809 19:12:01.262208 1006914 filesync.go:149] local asset: /home/jenkins/minikube-integration/17011-816603/.minikube/files/etc/ssl/certs/8234342.pem -> 8234342.pem in /etc/ssl/certs
	I0809 19:12:01.262321 1006914 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0809 19:12:01.268698 1006914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/files/etc/ssl/certs/8234342.pem --> /etc/ssl/certs/8234342.pem (1708 bytes)
	I0809 19:12:01.285762 1006914 start.go:303] post-start completed in 125.974194ms
	I0809 19:12:01.285849 1006914 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0809 19:12:01.285904 1006914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-321125
	I0809 19:12:01.302869 1006914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33619 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/stopped-upgrade-321125/id_rsa Username:docker}
	I0809 19:12:01.379988 1006914 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0809 19:12:01.383782 1006914 fix.go:56] fixHost completed within 4.676072566s
	I0809 19:12:01.383808 1006914 start.go:83] releasing machines lock for "stopped-upgrade-321125", held for 4.676126203s
	I0809 19:12:01.383891 1006914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-321125
	I0809 19:12:01.402255 1006914 ssh_runner.go:195] Run: cat /version.json
	I0809 19:12:01.402319 1006914 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0809 19:12:01.402388 1006914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-321125
	I0809 19:12:01.402322 1006914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-321125
	I0809 19:12:01.421788 1006914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33619 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/stopped-upgrade-321125/id_rsa Username:docker}
	I0809 19:12:01.423616 1006914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33619 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/stopped-upgrade-321125/id_rsa Username:docker}
	W0809 19:12:01.498755 1006914 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0809 19:12:01.498833 1006914 ssh_runner.go:195] Run: systemctl --version
	I0809 19:12:01.545744 1006914 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0809 19:12:01.594826 1006914 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0809 19:12:01.599170 1006914 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0809 19:12:01.615835 1006914 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0809 19:12:01.615916 1006914 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0809 19:12:01.637901 1006914 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0809 19:12:01.637926 1006914 start.go:466] detecting cgroup driver to use...
	I0809 19:12:01.637957 1006914 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0809 19:12:01.637996 1006914 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0809 19:12:01.659087 1006914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0809 19:12:01.668515 1006914 docker.go:196] disabling cri-docker service (if available) ...
	I0809 19:12:01.668577 1006914 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0809 19:12:01.677732 1006914 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0809 19:12:01.687016 1006914 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0809 19:12:01.696855 1006914 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0809 19:12:01.696927 1006914 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0809 19:12:01.761063 1006914 docker.go:212] disabling docker service ...
	I0809 19:12:01.761126 1006914 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0809 19:12:01.770522 1006914 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0809 19:12:01.779351 1006914 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0809 19:12:01.842297 1006914 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0809 19:12:01.909221 1006914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0809 19:12:01.918273 1006914 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0809 19:12:01.931771 1006914 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0809 19:12:01.931834 1006914 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0809 19:12:01.941744 1006914 out.go:177] 
	W0809 19:12:01.943350 1006914 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0809 19:12:01.943380 1006914 out.go:239] * 
	W0809 19:12:01.944349 1006914 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0809 19:12:01.946020 1006914 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:212: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-321125 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (64.11s)
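Note on the failure above: the exit status 90 comes from the final step in the log, where minikube rewrites pause_image with sed in /etc/crio/crio.conf.d/02-crio.conf; that drop-in does not exist on the machine provisioned by the old v1.9.0 binary (the remote host reports Ubuntu 19.10). A minimal defensive sketch of that step, assuming the same drop-in path and a [crio.image] section (illustrative only; this is not what minikube itself runs):

	# Seed the drop-in if the old base image lacks it, then apply the same
	# pause_image rewrite that failed above; on the seeded file the sed is a no-op.
	sudo mkdir -p /etc/crio/crio.conf.d
	[ -f /etc/crio/crio.conf.d/02-crio.conf ] || \
	  printf '[crio.image]\npause_image = "registry.k8s.io/pause:3.2"\n' | sudo tee /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf

With the file present the sed exits 0, so the start would not abort with RUNTIME_ENABLE at this step.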

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (72.77s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-734678 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-734678 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m7.99569828s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-734678] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17011
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17011-816603/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17011-816603/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node pause-734678 in cluster pause-734678
	* Pulling base image ...
	* Updating the running docker "pause-734678" container ...
	* Preparing Kubernetes v1.27.4 on CRI-O 1.24.6 ...
	* Configuring CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-734678" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0809 19:12:28.827530 1011483 out.go:296] Setting OutFile to fd 1 ...
	I0809 19:12:28.827720 1011483 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 19:12:28.827730 1011483 out.go:309] Setting ErrFile to fd 2...
	I0809 19:12:28.827735 1011483 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 19:12:28.827970 1011483 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17011-816603/.minikube/bin
	I0809 19:12:28.828653 1011483 out.go:303] Setting JSON to false
	I0809 19:12:28.830409 1011483 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":10504,"bootTime":1691597845,"procs":705,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1038-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0809 19:12:28.830488 1011483 start.go:138] virtualization: kvm guest
	I0809 19:12:28.832572 1011483 out.go:177] * [pause-734678] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)
	I0809 19:12:28.834115 1011483 out.go:177]   - MINIKUBE_LOCATION=17011
	I0809 19:12:28.834214 1011483 notify.go:220] Checking for updates...
	I0809 19:12:28.835435 1011483 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 19:12:28.836831 1011483 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17011-816603/kubeconfig
	I0809 19:12:28.838045 1011483 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17011-816603/.minikube
	I0809 19:12:28.839192 1011483 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0809 19:12:28.841203 1011483 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0809 19:12:28.842785 1011483 config.go:182] Loaded profile config "pause-734678": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0809 19:12:28.843216 1011483 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 19:12:28.865884 1011483 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0809 19:12:28.866041 1011483 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0809 19:12:28.921999 1011483 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:true NGoroutines:76 SystemTime:2023-08-09 19:12:28.912688079 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1038-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0809 19:12:28.922117 1011483 docker.go:294] overlay module found
	I0809 19:12:28.923905 1011483 out.go:177] * Using the docker driver based on existing profile
	I0809 19:12:28.925458 1011483 start.go:298] selected driver: docker
	I0809 19:12:28.925474 1011483 start.go:901] validating driver "docker" against &{Name:pause-734678 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:pause-734678 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 19:12:28.925612 1011483 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0809 19:12:28.925718 1011483 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0809 19:12:28.982071 1011483 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:true NGoroutines:76 SystemTime:2023-08-09 19:12:28.973432295 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1038-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0809 19:12:28.982788 1011483 cni.go:84] Creating CNI manager for ""
	I0809 19:12:28.982808 1011483 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0809 19:12:28.982823 1011483 start_flags.go:319] config:
	{Name:pause-734678 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:pause-734678 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 19:12:28.984450 1011483 out.go:177] * Starting control plane node pause-734678 in cluster pause-734678
	I0809 19:12:28.985642 1011483 cache.go:122] Beginning downloading kic base image for docker with crio
	I0809 19:12:28.986732 1011483 out.go:177] * Pulling base image ...
	I0809 19:12:28.987780 1011483 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0809 19:12:28.987806 1011483 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 in local docker daemon
	I0809 19:12:28.987814 1011483 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17011-816603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4
	I0809 19:12:28.987823 1011483 cache.go:57] Caching tarball of preloaded images
	I0809 19:12:28.987911 1011483 preload.go:174] Found /home/jenkins/minikube-integration/17011-816603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0809 19:12:28.987921 1011483 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on crio
	I0809 19:12:28.988049 1011483 profile.go:148] Saving config to /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/pause-734678/config.json ...
	I0809 19:12:29.005383 1011483 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 in local docker daemon, skipping pull
	I0809 19:12:29.005411 1011483 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 exists in daemon, skipping load
	I0809 19:12:29.005432 1011483 cache.go:195] Successfully downloaded all kic artifacts
	I0809 19:12:29.005472 1011483 start.go:365] acquiring machines lock for pause-734678: {Name:mk982864a14b76407542f6bb437b0fbcbda019c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 19:12:29.005539 1011483 start.go:369] acquired machines lock for "pause-734678" in 44.104µs
	I0809 19:12:29.005556 1011483 start.go:96] Skipping create...Using existing machine configuration
	I0809 19:12:29.005561 1011483 fix.go:54] fixHost starting: 
	I0809 19:12:29.005789 1011483 cli_runner.go:164] Run: docker container inspect pause-734678 --format={{.State.Status}}
	I0809 19:12:29.022037 1011483 fix.go:102] recreateIfNeeded on pause-734678: state=Running err=<nil>
	W0809 19:12:29.022066 1011483 fix.go:128] unexpected machine state, will restart: <nil>
	I0809 19:12:29.023689 1011483 out.go:177] * Updating the running docker "pause-734678" container ...
	I0809 19:12:29.025173 1011483 machine.go:88] provisioning docker machine ...
	I0809 19:12:29.025198 1011483 ubuntu.go:169] provisioning hostname "pause-734678"
	I0809 19:12:29.025256 1011483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-734678
	I0809 19:12:29.041994 1011483 main.go:141] libmachine: Using SSH client type: native
	I0809 19:12:29.042457 1011483 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 127.0.0.1 33616 <nil> <nil>}
	I0809 19:12:29.042477 1011483 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-734678 && echo "pause-734678" | sudo tee /etc/hostname
	I0809 19:12:29.195011 1011483 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-734678
	
	I0809 19:12:29.195089 1011483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-734678
	I0809 19:12:29.213021 1011483 main.go:141] libmachine: Using SSH client type: native
	I0809 19:12:29.213433 1011483 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 127.0.0.1 33616 <nil> <nil>}
	I0809 19:12:29.213451 1011483 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-734678' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-734678/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-734678' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0809 19:12:29.347669 1011483 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0809 19:12:29.347703 1011483 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17011-816603/.minikube CaCertPath:/home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17011-816603/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17011-816603/.minikube}
	I0809 19:12:29.347739 1011483 ubuntu.go:177] setting up certificates
	I0809 19:12:29.347750 1011483 provision.go:83] configureAuth start
	I0809 19:12:29.347810 1011483 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-734678
	I0809 19:12:29.364198 1011483 provision.go:138] copyHostCerts
	I0809 19:12:29.364268 1011483 exec_runner.go:144] found /home/jenkins/minikube-integration/17011-816603/.minikube/ca.pem, removing ...
	I0809 19:12:29.364289 1011483 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17011-816603/.minikube/ca.pem
	I0809 19:12:29.364354 1011483 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17011-816603/.minikube/ca.pem (1082 bytes)
	I0809 19:12:29.364457 1011483 exec_runner.go:144] found /home/jenkins/minikube-integration/17011-816603/.minikube/cert.pem, removing ...
	I0809 19:12:29.364468 1011483 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17011-816603/.minikube/cert.pem
	I0809 19:12:29.364495 1011483 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17011-816603/.minikube/cert.pem (1123 bytes)
	I0809 19:12:29.364546 1011483 exec_runner.go:144] found /home/jenkins/minikube-integration/17011-816603/.minikube/key.pem, removing ...
	I0809 19:12:29.364552 1011483 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17011-816603/.minikube/key.pem
	I0809 19:12:29.364571 1011483 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17011-816603/.minikube/key.pem (1679 bytes)
	I0809 19:12:29.364612 1011483 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17011-816603/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca-key.pem org=jenkins.pause-734678 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube pause-734678]
	I0809 19:12:29.754176 1011483 provision.go:172] copyRemoteCerts
	I0809 19:12:29.754235 1011483 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0809 19:12:29.754276 1011483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-734678
	I0809 19:12:29.771031 1011483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33616 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/pause-734678/id_rsa Username:docker}
	I0809 19:12:29.872692 1011483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0809 19:12:29.895425 1011483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0809 19:12:29.918050 1011483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0809 19:12:29.940110 1011483 provision.go:86] duration metric: configureAuth took 592.346833ms
	I0809 19:12:29.940140 1011483 ubuntu.go:193] setting minikube options for container-runtime
	I0809 19:12:29.940353 1011483 config.go:182] Loaded profile config "pause-734678": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0809 19:12:29.940467 1011483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-734678
	I0809 19:12:29.956888 1011483 main.go:141] libmachine: Using SSH client type: native
	I0809 19:12:29.957308 1011483 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 127.0.0.1 33616 <nil> <nil>}
	I0809 19:12:29.957333 1011483 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0809 19:12:35.324214 1011483 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0809 19:12:35.324241 1011483 machine.go:91] provisioned docker machine in 6.299054434s
	I0809 19:12:35.324250 1011483 start.go:300] post-start starting for "pause-734678" (driver="docker")
	I0809 19:12:35.324259 1011483 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0809 19:12:35.324314 1011483 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0809 19:12:35.324350 1011483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-734678
	I0809 19:12:35.341515 1011483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33616 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/pause-734678/id_rsa Username:docker}
	I0809 19:12:35.440678 1011483 ssh_runner.go:195] Run: cat /etc/os-release
	I0809 19:12:35.444201 1011483 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0809 19:12:35.444233 1011483 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0809 19:12:35.444242 1011483 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0809 19:12:35.444247 1011483 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0809 19:12:35.444258 1011483 filesync.go:126] Scanning /home/jenkins/minikube-integration/17011-816603/.minikube/addons for local assets ...
	I0809 19:12:35.444308 1011483 filesync.go:126] Scanning /home/jenkins/minikube-integration/17011-816603/.minikube/files for local assets ...
	I0809 19:12:35.444374 1011483 filesync.go:149] local asset: /home/jenkins/minikube-integration/17011-816603/.minikube/files/etc/ssl/certs/8234342.pem -> 8234342.pem in /etc/ssl/certs
	I0809 19:12:35.444462 1011483 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0809 19:12:35.452388 1011483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/files/etc/ssl/certs/8234342.pem --> /etc/ssl/certs/8234342.pem (1708 bytes)
	I0809 19:12:35.474025 1011483 start.go:303] post-start completed in 149.76077ms
	I0809 19:12:35.474093 1011483 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0809 19:12:35.474167 1011483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-734678
	I0809 19:12:35.490858 1011483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33616 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/pause-734678/id_rsa Username:docker}
	I0809 19:12:35.588497 1011483 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0809 19:12:35.593458 1011483 fix.go:56] fixHost completed within 6.587888053s
	I0809 19:12:35.593483 1011483 start.go:83] releasing machines lock for "pause-734678", held for 6.587933604s
	I0809 19:12:35.593557 1011483 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-734678
	I0809 19:12:35.613102 1011483 ssh_runner.go:195] Run: cat /version.json
	I0809 19:12:35.613167 1011483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-734678
	I0809 19:12:35.613187 1011483 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0809 19:12:35.613248 1011483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-734678
	I0809 19:12:35.632690 1011483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33616 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/pause-734678/id_rsa Username:docker}
	I0809 19:12:35.633416 1011483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33616 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/pause-734678/id_rsa Username:docker}
	I0809 19:12:35.819955 1011483 ssh_runner.go:195] Run: systemctl --version
	I0809 19:12:35.824316 1011483 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0809 19:12:35.963917 1011483 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0809 19:12:35.968190 1011483 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0809 19:12:35.977183 1011483 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0809 19:12:35.977269 1011483 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0809 19:12:35.985362 1011483 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
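Both `find ... -exec` passes above use the same reversible disabling idiom: matching CNI config files are renamed with a `.mk_disabled` suffix so the runtime stops loading them. A sketch of the pattern with the quoting tightened (the log's bare globs only work because the remote shell hands them to find unexpanded):

    sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*loopback.conf*' \
      -not -name '*.mk_disabled' \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;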
	I0809 19:12:35.985385 1011483 start.go:466] detecting cgroup driver to use...
	I0809 19:12:35.985419 1011483 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0809 19:12:35.985458 1011483 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0809 19:12:35.996576 1011483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0809 19:12:36.007087 1011483 docker.go:196] disabling cri-docker service (if available) ...
	I0809 19:12:36.007135 1011483 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0809 19:12:36.019000 1011483 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0809 19:12:36.029732 1011483 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0809 19:12:36.150362 1011483 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0809 19:12:36.259878 1011483 docker.go:212] disabling docker service ...
	I0809 19:12:36.259946 1011483 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0809 19:12:36.271999 1011483 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0809 19:12:36.282356 1011483 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0809 19:12:36.387431 1011483 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0809 19:12:36.490539 1011483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0809 19:12:36.501844 1011483 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0809 19:12:36.516718 1011483 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0809 19:12:36.516785 1011483 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0809 19:12:36.525789 1011483 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0809 19:12:36.525852 1011483 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0809 19:12:36.534513 1011483 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0809 19:12:36.543742 1011483 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0809 19:12:36.553087 1011483 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0809 19:12:36.562246 1011483 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0809 19:12:36.570228 1011483 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0809 19:12:36.579678 1011483 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0809 19:12:36.889936 1011483 ssh_runner.go:195] Run: sudo systemctl restart crio
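Reconstructed from the `tee` and `sed` commands above (not read back from the node), the runtime configuration this restart picks up amounts to:

    # /etc/crictl.yaml
    runtime-endpoint: unix:///var/run/crio/crio.sock

    # /etc/crio/crio.conf.d/02-crio.conf (the keys touched here)
    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"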
	I0809 19:12:37.151726 1011483 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0809 19:12:37.151803 1011483 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0809 19:12:37.155400 1011483 start.go:534] Will wait 60s for crictl version
	I0809 19:12:37.155466 1011483 ssh_runner.go:195] Run: which crictl
	I0809 19:12:37.158850 1011483 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0809 19:12:37.191372 1011483 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0809 19:12:37.191466 1011483 ssh_runner.go:195] Run: crio --version
	I0809 19:12:37.228518 1011483 ssh_runner.go:195] Run: crio --version
	I0809 19:12:37.266040 1011483 out.go:177] * Preparing Kubernetes v1.27.4 on CRI-O 1.24.6 ...
	I0809 19:12:37.267573 1011483 cli_runner.go:164] Run: docker network inspect pause-734678 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0809 19:12:37.284446 1011483 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0809 19:12:37.288364 1011483 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0809 19:12:37.288437 1011483 ssh_runner.go:195] Run: sudo crictl images --output json
	I0809 19:12:37.327255 1011483 crio.go:496] all images are preloaded for cri-o runtime.
	I0809 19:12:37.327276 1011483 crio.go:415] Images already preloaded, skipping extraction
	I0809 19:12:37.327318 1011483 ssh_runner.go:195] Run: sudo crictl images --output json
	I0809 19:12:37.360887 1011483 crio.go:496] all images are preloaded for cri-o runtime.
	I0809 19:12:37.360907 1011483 cache_images.go:84] Images are preloaded, skipping loading
	I0809 19:12:37.360972 1011483 ssh_runner.go:195] Run: crio config
	I0809 19:12:37.403595 1011483 cni.go:84] Creating CNI manager for ""
	I0809 19:12:37.403615 1011483 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0809 19:12:37.403629 1011483 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0809 19:12:37.403674 1011483 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-734678 NodeName:pause-734678 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0809 19:12:37.403833 1011483 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-734678"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
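
The block above is one multi-document YAML: an InitConfiguration and a ClusterConfiguration (kubeadm.k8s.io/v1beta3) followed by a KubeletConfiguration and a KubeProxyConfiguration, joined by `---` separators. Assuming it lands at /var/tmp/minikube/kubeadm.yaml.new (per the scp step below), it can be linted offline; `kubeadm config validate` has shipped since kubeadm v1.26:

    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new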
	
	I0809 19:12:37.403904 1011483 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=pause-734678 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:pause-734678 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
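The paired `ExecStart=` lines in the unit above are the standard systemd drop-in idiom: because this fragment is installed as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp below), the empty assignment first clears the ExecStart inherited from the base kubelet.service, and the next line then replaces it rather than appending a second command. The same pattern for any unit, sketched here with the kubelet flags abbreviated:

    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --config=/var/lib/kubelet/config.yaml
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart kubelet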
	I0809 19:12:37.403951 1011483 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0809 19:12:37.412564 1011483 binaries.go:44] Found k8s binaries, skipping transfer
	I0809 19:12:37.412634 1011483 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0809 19:12:37.420960 1011483 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (422 bytes)
	I0809 19:12:37.437423 1011483 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0809 19:12:37.457977 1011483 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2093 bytes)
	I0809 19:12:37.480023 1011483 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0809 19:12:37.483431 1011483 certs.go:56] Setting up /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/pause-734678 for IP: 192.168.85.2
	I0809 19:12:37.483468 1011483 certs.go:190] acquiring lock for shared ca certs: {Name:mk19b72d6df3cc07014c8108931f9946a7850469 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 19:12:37.483597 1011483 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17011-816603/.minikube/ca.key
	I0809 19:12:37.483633 1011483 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17011-816603/.minikube/proxy-client-ca.key
	I0809 19:12:37.483746 1011483 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/pause-734678/client.key
	I0809 19:12:37.483808 1011483 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/pause-734678/apiserver.key.43b9df8c
	I0809 19:12:37.483842 1011483 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/pause-734678/proxy-client.key
	I0809 19:12:37.483976 1011483 certs.go:437] found cert: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/home/jenkins/minikube-integration/17011-816603/.minikube/certs/823434.pem (1338 bytes)
	W0809 19:12:37.484007 1011483 certs.go:433] ignoring /home/jenkins/minikube-integration/17011-816603/.minikube/certs/home/jenkins/minikube-integration/17011-816603/.minikube/certs/823434_empty.pem, impossibly tiny 0 bytes
	I0809 19:12:37.484017 1011483 certs.go:437] found cert: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca-key.pem (1675 bytes)
	I0809 19:12:37.484041 1011483 certs.go:437] found cert: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca.pem (1082 bytes)
	I0809 19:12:37.484066 1011483 certs.go:437] found cert: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/home/jenkins/minikube-integration/17011-816603/.minikube/certs/cert.pem (1123 bytes)
	I0809 19:12:37.484087 1011483 certs.go:437] found cert: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/home/jenkins/minikube-integration/17011-816603/.minikube/certs/key.pem (1679 bytes)
	I0809 19:12:37.484129 1011483 certs.go:437] found cert: /home/jenkins/minikube-integration/17011-816603/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17011-816603/.minikube/files/etc/ssl/certs/8234342.pem (1708 bytes)
	I0809 19:12:37.484688 1011483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/pause-734678/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0809 19:12:37.509213 1011483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/pause-734678/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0809 19:12:37.532169 1011483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/pause-734678/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0809 19:12:37.557765 1011483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/pause-734678/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0809 19:12:37.581927 1011483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0809 19:12:37.607509 1011483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0809 19:12:37.633238 1011483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0809 19:12:37.656194 1011483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0809 19:12:37.678677 1011483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/files/etc/ssl/certs/8234342.pem --> /usr/share/ca-certificates/8234342.pem (1708 bytes)
	I0809 19:12:37.701305 1011483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0809 19:12:37.723524 1011483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/certs/823434.pem --> /usr/share/ca-certificates/823434.pem (1338 bytes)
	I0809 19:12:37.746054 1011483 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0809 19:12:37.763073 1011483 ssh_runner.go:195] Run: openssl version
	I0809 19:12:37.768788 1011483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8234342.pem && ln -fs /usr/share/ca-certificates/8234342.pem /etc/ssl/certs/8234342.pem"
	I0809 19:12:37.778405 1011483 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8234342.pem
	I0809 19:12:37.781496 1011483 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug  9 18:45 /usr/share/ca-certificates/8234342.pem
	I0809 19:12:37.781620 1011483 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8234342.pem
	I0809 19:12:37.787848 1011483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8234342.pem /etc/ssl/certs/3ec20f2e.0"
	I0809 19:12:37.796248 1011483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0809 19:12:37.805514 1011483 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0809 19:12:37.808921 1011483 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug  9 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0809 19:12:37.808981 1011483 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0809 19:12:37.815439 1011483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0809 19:12:37.823427 1011483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/823434.pem && ln -fs /usr/share/ca-certificates/823434.pem /etc/ssl/certs/823434.pem"
	I0809 19:12:37.832146 1011483 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/823434.pem
	I0809 19:12:37.835173 1011483 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug  9 18:45 /usr/share/ca-certificates/823434.pem
	I0809 19:12:37.835222 1011483 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/823434.pem
	I0809 19:12:37.841713 1011483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/823434.pem /etc/ssl/certs/51391683.0"
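The three test-and-link passes above follow OpenSSL's hashed-directory convention: a CA placed in /etc/ssl/certs is only consulted if a symlink named <subject-hash>.0 points at it, which is what the `openssl x509 -hash` calls and the 3ec20f2e.0 / b5213941.0 / 51391683.0 links establish. The reusable pattern, with a hypothetical myca.pem standing in:

    pem=/usr/share/ca-certificates/myca.pem        # hypothetical CA file
    h=$(openssl x509 -hash -noout -in "$pem")      # e.g. prints "b5213941"
    sudo ln -fs "$pem" "/etc/ssl/certs/${h}.0"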
	I0809 19:12:37.850440 1011483 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0809 19:12:37.853755 1011483 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0809 19:12:37.859971 1011483 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0809 19:12:37.867215 1011483 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0809 19:12:37.873887 1011483 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0809 19:12:37.880637 1011483 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0809 19:12:37.886694 1011483 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
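Each probe above asks the same question: `-checkend 86400` makes `openssl x509` exit non-zero if the certificate expires within the given number of seconds, i.e. "will this cert still be valid 24 hours from now?". Standalone:

    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expires within 24h - renew"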
	I0809 19:12:37.892924 1011483 kubeadm.go:404] StartCluster: {Name:pause-734678 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:pause-734678 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServ
erIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-p
rovisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 19:12:37.893062 1011483 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0809 19:12:37.893112 1011483 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0809 19:12:37.927815 1011483 cri.go:89] found id: "0a8bfc8b18c3e18e2b7746b4b426c7353167c6d51b8adadceb7c4b2e0b998deb"
	I0809 19:12:37.927834 1011483 cri.go:89] found id: "d5c75acd2320400227d1888ad9ca5e5ec56a1c2b9c137db426552d9917fd5fc4"
	I0809 19:12:37.927839 1011483 cri.go:89] found id: "e6c4b5962a406b09a27bd7dc77eee0e2ade33f2c0b5ffdb5e56aaa9b344eaa8a"
	I0809 19:12:37.927842 1011483 cri.go:89] found id: "ceb75a846b2ac678e3e3ce971acf343d80f085df68a6663f72eb2cb74fea7b4e"
	I0809 19:12:37.927846 1011483 cri.go:89] found id: "44f9cf9fa61794e73843bbea5b2164807a6b9e3c34809d6a39002ce25ea1f4dc"
	I0809 19:12:37.927854 1011483 cri.go:89] found id: "a39a01205edcecfec13a3acfed1c1b41936c28e167ff94b98d310c3d0a0078d2"
	I0809 19:12:37.927857 1011483 cri.go:89] found id: "a0b48871f29ff21b11516b6eb06576a0ee2aa2a10c9cc4903e32d05a15c43c26"
	I0809 19:12:37.927860 1011483 cri.go:89] found id: "4e02632e8a63af2ffa23af252ca8e2343013dfb43ccb909ec489b3c3fe4985b8"
	I0809 19:12:37.927863 1011483 cri.go:89] found id: "dece7c83b1b6870e3c39b80f058b2d294d2fb32a71d9125f40bac769a7ae1f51"
	I0809 19:12:37.927870 1011483 cri.go:89] found id: "1815cd6e3809d47f006ca917b6fb3bfbff31768e0c1ab487d5304344b5ce6b2f"
	I0809 19:12:37.927873 1011483 cri.go:89] found id: ""
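The ten IDs come straight from the crictl query above: `-a` includes stopped containers, `--quiet` prints bare container IDs, and `--label` filters on the pod-namespace label, so this is every kube-system container CRI-O knows about (the final empty id is apparently just the trailing newline of that output). The same query is handy interactively:

    # drop --quiet (or add -o json) for names, states and images
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system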
	I0809 19:12:37.927922 1011483 ssh_runner.go:195] Run: sudo runc list -f json
	I0809 19:12:37.954325 1011483 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"0a8bfc8b18c3e18e2b7746b4b426c7353167c6d51b8adadceb7c4b2e0b998deb","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/0a8bfc8b18c3e18e2b7746b4b426c7353167c6d51b8adadceb7c4b2e0b998deb/userdata","rootfs":"/var/lib/containers/storage/overlay/c2dd11758b417848a335cec9da583827998bd658bbc543bbcc938b8bf86255a3/merged","created":"2023-08-09T19:12:36.8567063Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"aa1b7757","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"aa1b7757\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.ter
minationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"0a8bfc8b18c3e18e2b7746b4b426c7353167c6d51b8adadceb7c4b2e0b998deb","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-08-09T19:12:36.668579728Z","io.kubernetes.cri-o.Image":"f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.27.4","io.kubernetes.cri-o.ImageRef":"f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-734678\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"ebd01dabccf67df65bdacd0aa133a60a\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-734678_ebd01dabccf67df65bdacd0aa133a60a/kube-controller-manager/1.log","io.kubernetes.cri-
o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c2dd11758b417848a335cec9da583827998bd658bbc543bbcc938b8bf86255a3/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-734678_kube-system_ebd01dabccf67df65bdacd0aa133a60a_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/5d850272d862c0e66abb7e02f3d3c07f59ba2028ae7e6ec15bdebbe7a61fd6bb/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"5d850272d862c0e66abb7e02f3d3c07f59ba2028ae7e6ec15bdebbe7a61fd6bb","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-734678_kube-system_ebd01dabccf67df65bdacd0aa133a60a_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\
"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/ebd01dabccf67df65bdacd0aa133a60a/containers/kube-controller-manager/e631504e\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/ebd01dabccf67df65bdacd0aa133a60a/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/mi
nikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-734678","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"ebd01dabccf67df65bdacd0aa133a60a","kubernetes.io/config.hash":"ebd01dabccf67df65bdacd0aa133a60a","kubernetes.io/config.seen":"2023-08-09T19:11:35.266155650Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1815cd6e3809d47f006ca917b6fb3bfbff31768e0c1ab487d5304344b5ce6b2f","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-con
tainers/1815cd6e3809d47f006ca917b6fb3bfbff31768e0c1ab487d5304344b5ce6b2f/userdata","rootfs":"/var/lib/containers/storage/overlay/2c99fb7933a8b9f49b464b86c673aae04bfca91f32d05f4ca84f614249b73816/merged","created":"2023-08-09T19:11:35.805037781Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"b9b6fd37","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"b9b6fd37\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"1815cd6e3809d47f006ca917b6fb3bfbff31768e0c1ab487d5304344b5ce6b2f","io.kubernetes.cri-o.ContainerType":"container","io.kubern
etes.cri-o.Created":"2023-08-09T19:11:35.766677993Z","io.kubernetes.cri-o.Image":"86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.7-0","io.kubernetes.cri-o.ImageRef":"86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-pause-734678\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"413705c441e799ce9fe2022cf83d0596\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-734678_413705c441e799ce9fe2022cf83d0596/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/2c99fb7933a8b9f49b464b86c673aae04bfca91f32d05f4ca84f614249b73816/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-734678_kube-system_413705c441e799ce9fe2022cf83d0596_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/ove
rlay-containers/354b8567d20f9022b10f199da0d9d221e157bc1a5007efe0f19613df47655a59/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"354b8567d20f9022b10f199da0d9d221e157bc1a5007efe0f19613df47655a59","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-734678_kube-system_413705c441e799ce9fe2022cf83d0596_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/413705c441e799ce9fe2022cf83d0596/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/413705c441e799ce9fe2022cf83d0596/containers/etcd/ec164ab4\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"
selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-pause-734678","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"413705c441e799ce9fe2022cf83d0596","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.85.2:2379","kubernetes.io/config.hash":"413705c441e799ce9fe2022cf83d0596","kubernetes.io/config.seen":"2023-08-09T19:11:35.266147501Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"44f9cf9fa61794e73843bbea5b2164807a6b9e3c34809d6a39002ce25ea1f4dc","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/44f9cf9fa61794e73843bbea5b2164807a6b9e3c34809d6a39002ce25ea1f4dc/userdata","rootfs":"/var/lib/containers/storage/overlay/40315ed2aa4026db679fead8a011e9e9d81495f5ad8b3e360f0814dca5ceed31/merged","created
":"2023-08-09T19:11:54.702764185Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"86e5b2d","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"86e5b2d\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"44f9cf9fa61794e73843bbea5b2164807a6b9e3c34809d6a39002ce25ea1f4dc","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-08-09T19:11:54.596867794Z","io.kubernetes.cri-o.Image":"b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindne
td:v20230511-dc714da8","io.kubernetes.cri-o.ImageRef":"b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-xxgzn\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"9265ba13-07f0-4c44-a920-74175ec0e07a\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-xxgzn_9265ba13-07f0-4c44-a920-74175ec0e07a/kindnet-cni/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/40315ed2aa4026db679fead8a011e9e9d81495f5ad8b3e360f0814dca5ceed31/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-xxgzn_kube-system_9265ba13-07f0-4c44-a920-74175ec0e07a_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/b3345840237267ff07cd1adf95864b9bf4139b9ed6a1d79057f1ade8554548d9/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"b3345840237267ff07cd1adf
95864b9bf4139b9ed6a1d79057f1ade8554548d9","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-xxgzn_kube-system_9265ba13-07f0-4c44-a920-74175ec0e07a_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/9265ba13-07f0-4c44-a920-74175ec0e07a/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/9265ba13-07f0-4c44-a920-74175ec0e07a/containers/kindnet-cni/b2737b23\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_p
ath\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/9265ba13-07f0-4c44-a920-74175ec0e07a/volumes/kubernetes.io~projected/kube-api-access-s79th\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kindnet-xxgzn","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"9265ba13-07f0-4c44-a920-74175ec0e07a","kubernetes.io/config.seen":"2023-08-09T19:11:53.616904447Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4e02632e8a63af2ffa23af252ca8e2343013dfb43ccb909ec489b3c3fe4985b8","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/4e02632e8a63af2ffa23af252ca8e2343013dfb43ccb909ec489b3c3fe4985b8/userdata","rootfs":"/var/lib/containers/storage/overlay/cccb8f6c78a8bd3ad1e6786c946
e19333c2136237e9cea5b93b4d3a7061a977b/merged","created":"2023-08-09T19:11:35.87352766Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"aa1b7757","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"aa1b7757\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"4e02632e8a63af2ffa23af252ca8e2343013dfb43ccb909ec489b3c3fe4985b8","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-08-09T19:11:35.789890678Z","io.kubernetes.cri-o.Image":"f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f
13f286a5","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.27.4","io.kubernetes.cri-o.ImageRef":"f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-734678\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"ebd01dabccf67df65bdacd0aa133a60a\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-734678_ebd01dabccf67df65bdacd0aa133a60a/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/cccb8f6c78a8bd3ad1e6786c946e19333c2136237e9cea5b93b4d3a7061a977b/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-734678_kube-system_ebd01dabccf67df65bdacd0aa133a60a_0","io.kubernetes.cri-o.ResolvPath":"/run/containe
rs/storage/overlay-containers/5d850272d862c0e66abb7e02f3d3c07f59ba2028ae7e6ec15bdebbe7a61fd6bb/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"5d850272d862c0e66abb7e02f3d3c07f59ba2028ae7e6ec15bdebbe7a61fd6bb","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-734678_kube-system_ebd01dabccf67df65bdacd0aa133a60a_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/ebd01dabccf67df65bdacd0aa133a60a/containers/kube-controller-manager/73945e9e\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/ebd01dabccf67df65bdacd0aa133a60a/etc-
hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"r
eadonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-734678","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"ebd01dabccf67df65bdacd0aa133a60a","kubernetes.io/config.hash":"ebd01dabccf67df65bdacd0aa133a60a","kubernetes.io/config.seen":"2023-08-09T19:11:35.266155650Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a0b48871f29ff21b11516b6eb06576a0ee2aa2a10c9cc4903e32d05a15c43c26","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/a0b48871f29ff21b11516b6eb06576a0ee2aa2a10c9cc4903e32d05a15c43c26/userdata","rootfs":"/var/lib/containers/storage/overlay/905549c375cd553438b2508047ccd45780d8ee8f6a99ecc624ede41090532d01/merged","created":"2023-08-09T19:11:35.870547226Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"373e41ff","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.co
ntainer.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"373e41ff\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"a0b48871f29ff21b11516b6eb06576a0ee2aa2a10c9cc4903e32d05a15c43c26","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-08-09T19:11:35.79088188Z","io.kubernetes.cri-o.Image":"98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.27.4","io.kubernetes.cri-o.ImageRef":"98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube
-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-734678\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"64fb80813b5d31de4bd6500f347a3baf\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-734678_64fb80813b5d31de4bd6500f347a3baf/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/905549c375cd553438b2508047ccd45780d8ee8f6a99ecc624ede41090532d01/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-734678_kube-system_64fb80813b5d31de4bd6500f347a3baf_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/96ff7fff17c069381457285f26882a42f985c5f8f4ba78174aaf3fbd2a2d9631/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"96ff7fff17c069381457285f26882a42f985c5f8f4ba78174aaf3fbd2a2d9631","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-734678_kube-system_64fb80813b5d31de4bd6500f347a
3baf_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/64fb80813b5d31de4bd6500f347a3baf/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/64fb80813b5d31de4bd6500f347a3baf/containers/kube-scheduler/09f9d0fc\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-pause-734678","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"64fb80813b5d31de4bd6500f347a3baf","kubernetes.io/config.hash":"64fb80813b5d
31de4bd6500f347a3baf","kubernetes.io/config.seen":"2023-08-09T19:11:35.266157417Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a39a01205edcecfec13a3acfed1c1b41936c28e167ff94b98d310c3d0a0078d2","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/a39a01205edcecfec13a3acfed1c1b41936c28e167ff94b98d310c3d0a0078d2/userdata","rootfs":"/var/lib/containers/storage/overlay/dabfa913fa40a6d67ebc37dd611a8c20415ca22eef65746b844f8ec4fe2a12d5/merged","created":"2023-08-09T19:11:54.695941523Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"9c739e96","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"9c739e96\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationM
essagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"a39a01205edcecfec13a3acfed1c1b41936c28e167ff94b98d310c3d0a0078d2","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-08-09T19:11:54.586549182Z","io.kubernetes.cri-o.Image":"6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-proxy:v1.27.4","io.kubernetes.cri-o.ImageRef":"6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-q25ss\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"23d08f14-790a-46be-87b1-032c144a76cb\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-q25ss_23d08f14-790a-46be-87b1-032c144a76cb/kube-proxy/0.log","io.kubernetes.cri-o.Me
tadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/dabfa913fa40a6d67ebc37dd611a8c20415ca22eef65746b844f8ec4fe2a12d5/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-q25ss_kube-system_23d08f14-790a-46be-87b1-032c144a76cb_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/10a4f63fe91299bda7c87ceab088af23f72fc63df7d535bae813bae934ece015/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"10a4f63fe91299bda7c87ceab088af23f72fc63df7d535bae813bae934ece015","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-q25ss_kube-system_23d08f14-790a-46be-87b1-032c144a76cb_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules
\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/23d08f14-790a-46be-87b1-032c144a76cb/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/23d08f14-790a-46be-87b1-032c144a76cb/containers/kube-proxy/e21a3f69\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/23d08f14-790a-46be-87b1-032c144a76cb/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/23d08f14-790a-46be-87b1-032c144a76cb/volumes/kubernetes.io~projected/kube-api-access-854kj\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-
proxy-q25ss","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"23d08f14-790a-46be-87b1-032c144a76cb","kubernetes.io/config.seen":"2023-08-09T19:11:53.613753231Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ceb75a846b2ac678e3e3ce971acf343d80f085df68a6663f72eb2cb74fea7b4e","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/ceb75a846b2ac678e3e3ce971acf343d80f085df68a6663f72eb2cb74fea7b4e/userdata","rootfs":"/var/lib/containers/storage/overlay/34c5295ae803f5bdd9e84e3c4fc69367056829f26e37a693b203a59d667c3038/merged","created":"2023-08-09T19:12:26.112045891Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"9c1ea2be","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\
":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"9c1ea2be\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"ceb75a846b2ac678e3e3ce971acf343d80f085df68a6663f72eb2cb74fea7b4e","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-08-09T19:12:26.084636
508Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","io.kubernetes.cri-o.ImageName":"registry.k8s.io/coredns/coredns:v1.10.1","io.kubernetes.cri-o.ImageRef":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-5d78c9869d-zwnjn\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"7c939e8b-f847-44a3-984e-6276b66d3afc\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-5d78c9869d-zwnjn_7c939e8b-f847-44a3-984e-6276b66d3afc/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/34c5295ae803f5bdd9e84e3c4fc69367056829f26e37a693b203a59d667c3038/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-5d78c9869d-zwnjn_kube-system_7c939e8b-f847-44a3-984e-6276b66d3afc_0","io.kubernet
es.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/b84b19cee96f1e695c1e86943773c0e8a9b43ae52c30af05ba93ec149416d8f1/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"b84b19cee96f1e695c1e86943773c0e8a9b43ae52c30af05ba93ec149416d8f1","io.kubernetes.cri-o.SandboxName":"k8s_coredns-5d78c9869d-zwnjn_kube-system_7c939e8b-f847-44a3-984e-6276b66d3afc_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/7c939e8b-f847-44a3-984e-6276b66d3afc/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/7c939e8b-f847-44a3-984e-6276b66d3afc/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/
var/lib/kubelet/pods/7c939e8b-f847-44a3-984e-6276b66d3afc/containers/coredns/481560e7\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/7c939e8b-f847-44a3-984e-6276b66d3afc/volumes/kubernetes.io~projected/kube-api-access-bdpzl\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-5d78c9869d-zwnjn","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"7c939e8b-f847-44a3-984e-6276b66d3afc","kubernetes.io/config.seen":"2023-08-09T19:12:25.721406957Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d5c75acd2320400227d1888ad9ca5e5ec56a1c2b9c137db426552d9917fd5fc4","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/d5c75acd2320400227d1888ad9ca5e5ec56a1c2b9c137db426552d9917fd5fc4/userdata","rootfs":"/var/lib/containers/st
orage/overlay/1635b6e3bb53b2d549c941358f80d90221afb54cbe8e3ca338bcca03e0bcbfd3/merged","created":"2023-08-09T19:12:36.797059839Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"373e41ff","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"373e41ff\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"d5c75acd2320400227d1888ad9ca5e5ec56a1c2b9c137db426552d9917fd5fc4","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-08-09T19:12:36.657059928Z","io.kubernetes.cri-o.Image":"98ef2570f3cde33e2d94e0d
55c7f1345a0e9ab8d76faa14a24693f5ee1872f16","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.27.4","io.kubernetes.cri-o.ImageRef":"98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-734678\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"64fb80813b5d31de4bd6500f347a3baf\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-734678_64fb80813b5d31de4bd6500f347a3baf/kube-scheduler/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/1635b6e3bb53b2d549c941358f80d90221afb54cbe8e3ca338bcca03e0bcbfd3/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-734678_kube-system_64fb80813b5d31de4bd6500f347a3baf_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-contai
ners/96ff7fff17c069381457285f26882a42f985c5f8f4ba78174aaf3fbd2a2d9631/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"96ff7fff17c069381457285f26882a42f985c5f8f4ba78174aaf3fbd2a2d9631","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-734678_kube-system_64fb80813b5d31de4bd6500f347a3baf_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/64fb80813b5d31de4bd6500f347a3baf/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/64fb80813b5d31de4bd6500f347a3baf/containers/kube-scheduler/45a2f772\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":t
rue,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-pause-734678","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"64fb80813b5d31de4bd6500f347a3baf","kubernetes.io/config.hash":"64fb80813b5d31de4bd6500f347a3baf","kubernetes.io/config.seen":"2023-08-09T19:11:35.266157417Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"dece7c83b1b6870e3c39b80f058b2d294d2fb32a71d9125f40bac769a7ae1f51","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/dece7c83b1b6870e3c39b80f058b2d294d2fb32a71d9125f40bac769a7ae1f51/userdata","rootfs":"/var/lib/containers/storage/overlay/e97a842e916710626e8ab9f0511c249d4f9bc4445bb66a0c0eb28078bebf7c50/merged","created":"2023-08-09T19:11:35.870590793Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"be341997","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount"
:"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"be341997\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"dece7c83b1b6870e3c39b80f058b2d294d2fb32a71d9125f40bac769a7ae1f51","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-08-09T19:11:35.789342854Z","io.kubernetes.cri-o.Image":"e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.27.4","io.kubernetes.cri-o.ImageRef":"e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.ku
bernetes.pod.name\":\"kube-apiserver-pause-734678\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"468974dd0ea315ea1ba76795e744d396\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-734678_468974dd0ea315ea1ba76795e744d396/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/e97a842e916710626e8ab9f0511c249d4f9bc4445bb66a0c0eb28078bebf7c50/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-734678_kube-system_468974dd0ea315ea1ba76795e744d396_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/28e451f77eb1d445b293f5d94e4f49468d2eaf929f6e8f26ee972834bc6ac962/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"28e451f77eb1d445b293f5d94e4f49468d2eaf929f6e8f26ee972834bc6ac962","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-734678_kube-system_468974dd0ea315ea1ba76795e744d396_0","io.kubernet
es.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/468974dd0ea315ea1ba76795e744d396/containers/kube-apiserver/64f47824\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/468974dd0ea315ea1ba76795e744d396/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":fa
lse},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-pause-734678","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"468974dd0ea315ea1ba76795e744d396","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.85.2:8443","kubernetes.io/config.hash":"468974dd0ea315ea1ba76795e744d396","kubernetes.io/config.seen":"2023-08-09T19:11:35.266153587Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e6c4b5962a406b09a27bd7dc77eee0e2ade33f2c0b5ffdb5e56aaa9b344eaa8a","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/e6c4b5962a406b09a27bd7dc77eee0e2ade33f2c0b5ff
db5e56aaa9b344eaa8a/userdata","rootfs":"/var/lib/containers/storage/overlay/fb0d54bd1f9c94a2511f1369893bacbbf3b230e61607b1f879970494fbaae1ec/merged","created":"2023-08-09T19:12:36.796836748Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"b9b6fd37","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"b9b6fd37\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"e6c4b5962a406b09a27bd7dc77eee0e2ade33f2c0b5ffdb5e56aaa9b344eaa8a","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-08-09T19:12:36.615189885Z",
"io.kubernetes.cri-o.Image":"86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.7-0","io.kubernetes.cri-o.ImageRef":"86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-pause-734678\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"413705c441e799ce9fe2022cf83d0596\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-734678_413705c441e799ce9fe2022cf83d0596/etcd/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/fb0d54bd1f9c94a2511f1369893bacbbf3b230e61607b1f879970494fbaae1ec/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-734678_kube-system_413705c441e799ce9fe2022cf83d0596_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/354b8567d20f9022b10f199
da0d9d221e157bc1a5007efe0f19613df47655a59/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"354b8567d20f9022b10f199da0d9d221e157bc1a5007efe0f19613df47655a59","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-734678_kube-system_413705c441e799ce9fe2022cf83d0596_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/413705c441e799ce9fe2022cf83d0596/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/413705c441e799ce9fe2022cf83d0596/containers/etcd/0aa2055c\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_p
ath\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-pause-734678","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"413705c441e799ce9fe2022cf83d0596","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.85.2:2379","kubernetes.io/config.hash":"413705c441e799ce9fe2022cf83d0596","kubernetes.io/config.seen":"2023-08-09T19:11:35.266147501Z","kubernetes.io/config.source":"file"},"owner":"root"}]
	I0809 19:12:37.954918 1011483 cri.go:126] list returned 10 containers
	I0809 19:12:37.954938 1011483 cri.go:129] container: {ID:0a8bfc8b18c3e18e2b7746b4b426c7353167c6d51b8adadceb7c4b2e0b998deb Status:stopped}
	I0809 19:12:37.954957 1011483 cri.go:135] skipping {0a8bfc8b18c3e18e2b7746b4b426c7353167c6d51b8adadceb7c4b2e0b998deb stopped}: state = "stopped", want "paused"
	I0809 19:12:37.954971 1011483 cri.go:129] container: {ID:1815cd6e3809d47f006ca917b6fb3bfbff31768e0c1ab487d5304344b5ce6b2f Status:stopped}
	I0809 19:12:37.954979 1011483 cri.go:135] skipping {1815cd6e3809d47f006ca917b6fb3bfbff31768e0c1ab487d5304344b5ce6b2f stopped}: state = "stopped", want "paused"
	I0809 19:12:37.954990 1011483 cri.go:129] container: {ID:44f9cf9fa61794e73843bbea5b2164807a6b9e3c34809d6a39002ce25ea1f4dc Status:stopped}
	I0809 19:12:37.955003 1011483 cri.go:135] skipping {44f9cf9fa61794e73843bbea5b2164807a6b9e3c34809d6a39002ce25ea1f4dc stopped}: state = "stopped", want "paused"
	I0809 19:12:37.955014 1011483 cri.go:129] container: {ID:4e02632e8a63af2ffa23af252ca8e2343013dfb43ccb909ec489b3c3fe4985b8 Status:stopped}
	I0809 19:12:37.955026 1011483 cri.go:135] skipping {4e02632e8a63af2ffa23af252ca8e2343013dfb43ccb909ec489b3c3fe4985b8 stopped}: state = "stopped", want "paused"
	I0809 19:12:37.955037 1011483 cri.go:129] container: {ID:a0b48871f29ff21b11516b6eb06576a0ee2aa2a10c9cc4903e32d05a15c43c26 Status:stopped}
	I0809 19:12:37.955045 1011483 cri.go:135] skipping {a0b48871f29ff21b11516b6eb06576a0ee2aa2a10c9cc4903e32d05a15c43c26 stopped}: state = "stopped", want "paused"
	I0809 19:12:37.955055 1011483 cri.go:129] container: {ID:a39a01205edcecfec13a3acfed1c1b41936c28e167ff94b98d310c3d0a0078d2 Status:stopped}
	I0809 19:12:37.955067 1011483 cri.go:135] skipping {a39a01205edcecfec13a3acfed1c1b41936c28e167ff94b98d310c3d0a0078d2 stopped}: state = "stopped", want "paused"
	I0809 19:12:37.955078 1011483 cri.go:129] container: {ID:ceb75a846b2ac678e3e3ce971acf343d80f085df68a6663f72eb2cb74fea7b4e Status:stopped}
	I0809 19:12:37.955091 1011483 cri.go:135] skipping {ceb75a846b2ac678e3e3ce971acf343d80f085df68a6663f72eb2cb74fea7b4e stopped}: state = "stopped", want "paused"
	I0809 19:12:37.955102 1011483 cri.go:129] container: {ID:d5c75acd2320400227d1888ad9ca5e5ec56a1c2b9c137db426552d9917fd5fc4 Status:stopped}
	I0809 19:12:37.955114 1011483 cri.go:135] skipping {d5c75acd2320400227d1888ad9ca5e5ec56a1c2b9c137db426552d9917fd5fc4 stopped}: state = "stopped", want "paused"
	I0809 19:12:37.955125 1011483 cri.go:129] container: {ID:dece7c83b1b6870e3c39b80f058b2d294d2fb32a71d9125f40bac769a7ae1f51 Status:stopped}
	I0809 19:12:37.955136 1011483 cri.go:135] skipping {dece7c83b1b6870e3c39b80f058b2d294d2fb32a71d9125f40bac769a7ae1f51 stopped}: state = "stopped", want "paused"
	I0809 19:12:37.955141 1011483 cri.go:129] container: {ID:e6c4b5962a406b09a27bd7dc77eee0e2ade33f2c0b5ffdb5e56aaa9b344eaa8a Status:stopped}
	I0809 19:12:37.955153 1011483 cri.go:135] skipping {e6c4b5962a406b09a27bd7dc77eee0e2ade33f2c0b5ffdb5e56aaa9b344eaa8a stopped}: state = "stopped", want "paused"
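
The cri.go lines above list ten containers and skip every one because its state is "stopped" while the pause check wants "paused". A compact sketch of that filter, assuming a simplified container type (names are illustrative; the real logic lives in minikube's cri package):

    package main

    import "fmt"

    // container mirrors the {ID Status} pairs printed by cri.go:129 above
    // (illustrative, not minikube's actual type).
    type container struct {
        ID     string
        Status string
    }

    // filterByState keeps containers whose status matches want and logs a
    // "skipping" line for the rest, as cri.go:135 does above.
    func filterByState(all []container, want string) []container {
        var kept []container
        for _, c := range all {
            if c.Status != want {
                fmt.Printf("skipping %v: state = %q, want %q\n", c, c.Status, want)
                continue
            }
            kept = append(kept, c)
        }
        return kept
    }

    func main() {
        all := []container{{ID: "0a8bfc8b18c3", Status: "stopped"}}
        fmt.Println(filterByState(all, "paused")) // -> [] : nothing is paused
    }
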
	I0809 19:12:37.955206 1011483 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0809 19:12:37.963872 1011483 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0809 19:12:37.963890 1011483 kubeadm.go:636] restartCluster start
	I0809 19:12:37.963934 1011483 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0809 19:12:37.972170 1011483 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0809 19:12:37.972872 1011483 kubeconfig.go:92] found "pause-734678" server: "https://192.168.85.2:8443"
	I0809 19:12:37.973932 1011483 kapi.go:59] client config for pause-734678: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17011-816603/.minikube/profiles/pause-734678/client.crt", KeyFile:"/home/jenkins/minikube-integration/17011-816603/.minikube/profiles/pause-734678/client.key", CAFile:"/home/jenkins/minikube-integration/17011-816603/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d27100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0809 19:12:37.974765 1011483 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0809 19:12:37.982811 1011483 api_server.go:166] Checking apiserver status ...
	I0809 19:12:37.982855 1011483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0809 19:12:37.992403 1011483 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0809 19:12:37.992423 1011483 api_server.go:166] Checking apiserver status ...
	I0809 19:12:37.992465 1011483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0809 19:12:38.002942 1011483 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0809 19:12:38.503682 1011483 api_server.go:166] Checking apiserver status ...
	I0809 19:12:38.503777 1011483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0809 19:12:38.514030 1011483 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0809 19:12:39.003895 1011483 api_server.go:166] Checking apiserver status ...
	I0809 19:12:39.003959 1011483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0809 19:12:39.013734 1011483 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0809 19:12:39.503220 1011483 api_server.go:166] Checking apiserver status ...
	I0809 19:12:39.503302 1011483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0809 19:12:39.513506 1011483 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0809 19:12:40.003034 1011483 api_server.go:166] Checking apiserver status ...
	I0809 19:12:40.003122 1011483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0809 19:12:40.013757 1011483 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0809 19:12:40.503284 1011483 api_server.go:166] Checking apiserver status ...
	I0809 19:12:40.503375 1011483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0809 19:12:40.513516 1011483 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0809 19:12:41.003063 1011483 api_server.go:166] Checking apiserver status ...
	I0809 19:12:41.003164 1011483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0809 19:12:41.013203 1011483 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0809 19:12:41.503324 1011483 api_server.go:166] Checking apiserver status ...
	I0809 19:12:41.503426 1011483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0809 19:12:41.513644 1011483 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0809 19:12:42.004073 1011483 api_server.go:166] Checking apiserver status ...
	I0809 19:12:42.004176 1011483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0809 19:12:42.016018 1011483 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0809 19:12:42.503438 1011483 api_server.go:166] Checking apiserver status ...
	I0809 19:12:42.503526 1011483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0809 19:12:42.513729 1011483 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/3156/cgroup
	I0809 19:12:42.522236 1011483 api_server.go:182] apiserver freezer: "10:freezer:/docker/455c5c1d8c5d4449aa59e19e9578b80ef8192cd8c71866255d281d670ae7a0af/crio/crio-f112396efdb77643d61e214e783dfe5ff3446c800c5ca9b473e5bc872478e5f6"
	I0809 19:12:42.522316 1011483 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/455c5c1d8c5d4449aa59e19e9578b80ef8192cd8c71866255d281d670ae7a0af/crio/crio-f112396efdb77643d61e214e783dfe5ff3446c800c5ca9b473e5bc872478e5f6/freezer.state
	I0809 19:12:42.530642 1011483 api_server.go:204] freezer state: "THAWED"
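
To decide whether the apiserver container is merely paused, the two commands above resolve its cgroup-v1 freezer path from /proc/<pid>/cgroup and then read freezer.state ("THAWED" here, so not frozen). A sketch of the same check in Go, assuming the cgroup-v1 layout seen in this run (the pid is the one pgrep returned above):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // freezerState resolves the cgroup-v1 freezer path for pid and reads
    // its freezer.state ("THAWED" or "FROZEN"), like the two shell
    // commands above. Assumes cgroup v1, as in this run.
    func freezerState(pid int) (string, error) {
        data, err := os.ReadFile(fmt.Sprintf("/proc/%d/cgroup", pid))
        if err != nil {
            return "", err
        }
        for _, line := range strings.Split(string(data), "\n") {
            // entries look like "10:freezer:/docker/<id>/crio/crio-<id>"
            parts := strings.SplitN(line, ":", 3)
            if len(parts) == 3 && parts[1] == "freezer" {
                state, err := os.ReadFile("/sys/fs/cgroup/freezer" + parts[2] + "/freezer.state")
                if err != nil {
                    return "", err
                }
                return strings.TrimSpace(string(state)), nil
            }
        }
        return "", fmt.Errorf("no freezer cgroup for pid %d", pid)
    }

    func main() {
        fmt.Println(freezerState(3156)) // pid taken from the log above
    }
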
	I0809 19:12:42.530672 1011483 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0809 19:12:47.531348 1011483 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0809 19:12:47.531402 1011483 retry.go:31] will retry after 246.710256ms: state is "Stopped"
	I0809 19:12:47.778840 1011483 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0809 19:12:52.779748 1011483 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0809 19:12:52.779801 1011483 retry.go:31] will retry after 309.763756ms: state is "Stopped"
	I0809 19:12:53.090217 1011483 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0809 19:12:57.783360 1011483 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0809 19:12:57.783417 1011483 kubeadm.go:611] needs reconfigure: apiserver error: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0809 19:12:57.783430 1011483 kubeadm.go:1128] stopping kube-system containers ...
	I0809 19:12:57.783444 1011483 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0809 19:12:57.783532 1011483 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0809 19:12:58.058792 1011483 cri.go:89] found id: "0f77a9a2da8b6d363a14c0b5d40080f9af67c6640d2822a43e82eb09bad297d9"
	I0809 19:12:58.058819 1011483 cri.go:89] found id: "3b23f5302c00ae4a6dc8fdd99a47353d345d3eaf4a110ea7793654f4a876cff8"
	I0809 19:12:58.058845 1011483 cri.go:89] found id: "4cdeeccf1e020b0f8436e2bf45bc3520fc0e59a7253a1255b7b6a12cae630441"
	I0809 19:12:58.058852 1011483 cri.go:89] found id: "bc3074123d41a976429c2777747bda837d51d3dcc586eac5bb23fb9be27dffe2"
	I0809 19:12:58.058858 1011483 cri.go:89] found id: "d222edca27c82b853c2e77a2ee623d656f0e5640f2d003d2b5a25cc08495ae21"
	I0809 19:12:58.058864 1011483 cri.go:89] found id: "f112396efdb77643d61e214e783dfe5ff3446c800c5ca9b473e5bc872478e5f6"
	I0809 19:12:58.058870 1011483 cri.go:89] found id: "0a8bfc8b18c3e18e2b7746b4b426c7353167c6d51b8adadceb7c4b2e0b998deb"
	I0809 19:12:58.058875 1011483 cri.go:89] found id: "d5c75acd2320400227d1888ad9ca5e5ec56a1c2b9c137db426552d9917fd5fc4"
	I0809 19:12:58.058881 1011483 cri.go:89] found id: "e6c4b5962a406b09a27bd7dc77eee0e2ade33f2c0b5ffdb5e56aaa9b344eaa8a"
	I0809 19:12:58.058892 1011483 cri.go:89] found id: "ceb75a846b2ac678e3e3ce971acf343d80f085df68a6663f72eb2cb74fea7b4e"
	I0809 19:12:58.058902 1011483 cri.go:89] found id: "44f9cf9fa61794e73843bbea5b2164807a6b9e3c34809d6a39002ce25ea1f4dc"
	I0809 19:12:58.058905 1011483 cri.go:89] found id: "a39a01205edcecfec13a3acfed1c1b41936c28e167ff94b98d310c3d0a0078d2"
	I0809 19:12:58.058909 1011483 cri.go:89] found id: "dece7c83b1b6870e3c39b80f058b2d294d2fb32a71d9125f40bac769a7ae1f51"
	I0809 19:12:58.058912 1011483 cri.go:89] found id: ""
	I0809 19:12:58.058917 1011483 cri.go:234] Stopping containers: [0f77a9a2da8b6d363a14c0b5d40080f9af67c6640d2822a43e82eb09bad297d9 3b23f5302c00ae4a6dc8fdd99a47353d345d3eaf4a110ea7793654f4a876cff8 4cdeeccf1e020b0f8436e2bf45bc3520fc0e59a7253a1255b7b6a12cae630441 bc3074123d41a976429c2777747bda837d51d3dcc586eac5bb23fb9be27dffe2 d222edca27c82b853c2e77a2ee623d656f0e5640f2d003d2b5a25cc08495ae21 f112396efdb77643d61e214e783dfe5ff3446c800c5ca9b473e5bc872478e5f6 0a8bfc8b18c3e18e2b7746b4b426c7353167c6d51b8adadceb7c4b2e0b998deb d5c75acd2320400227d1888ad9ca5e5ec56a1c2b9c137db426552d9917fd5fc4 e6c4b5962a406b09a27bd7dc77eee0e2ade33f2c0b5ffdb5e56aaa9b344eaa8a ceb75a846b2ac678e3e3ce971acf343d80f085df68a6663f72eb2cb74fea7b4e 44f9cf9fa61794e73843bbea5b2164807a6b9e3c34809d6a39002ce25ea1f4dc a39a01205edcecfec13a3acfed1c1b41936c28e167ff94b98d310c3d0a0078d2 dece7c83b1b6870e3c39b80f058b2d294d2fb32a71d9125f40bac769a7ae1f51]
	I0809 19:12:58.058972 1011483 ssh_runner.go:195] Run: which crictl
	I0809 19:12:58.062921 1011483 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 0f77a9a2da8b6d363a14c0b5d40080f9af67c6640d2822a43e82eb09bad297d9 3b23f5302c00ae4a6dc8fdd99a47353d345d3eaf4a110ea7793654f4a876cff8 4cdeeccf1e020b0f8436e2bf45bc3520fc0e59a7253a1255b7b6a12cae630441 bc3074123d41a976429c2777747bda837d51d3dcc586eac5bb23fb9be27dffe2 d222edca27c82b853c2e77a2ee623d656f0e5640f2d003d2b5a25cc08495ae21 f112396efdb77643d61e214e783dfe5ff3446c800c5ca9b473e5bc872478e5f6 0a8bfc8b18c3e18e2b7746b4b426c7353167c6d51b8adadceb7c4b2e0b998deb d5c75acd2320400227d1888ad9ca5e5ec56a1c2b9c137db426552d9917fd5fc4 e6c4b5962a406b09a27bd7dc77eee0e2ade33f2c0b5ffdb5e56aaa9b344eaa8a ceb75a846b2ac678e3e3ce971acf343d80f085df68a6663f72eb2cb74fea7b4e 44f9cf9fa61794e73843bbea5b2164807a6b9e3c34809d6a39002ce25ea1f4dc a39a01205edcecfec13a3acfed1c1b41936c28e167ff94b98d310c3d0a0078d2 dece7c83b1b6870e3c39b80f058b2d294d2fb32a71d9125f40bac769a7ae1f51
	I0809 19:13:13.930235 1011483 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 0f77a9a2da8b6d363a14c0b5d40080f9af67c6640d2822a43e82eb09bad297d9 3b23f5302c00ae4a6dc8fdd99a47353d345d3eaf4a110ea7793654f4a876cff8 4cdeeccf1e020b0f8436e2bf45bc3520fc0e59a7253a1255b7b6a12cae630441 bc3074123d41a976429c2777747bda837d51d3dcc586eac5bb23fb9be27dffe2 d222edca27c82b853c2e77a2ee623d656f0e5640f2d003d2b5a25cc08495ae21 f112396efdb77643d61e214e783dfe5ff3446c800c5ca9b473e5bc872478e5f6 0a8bfc8b18c3e18e2b7746b4b426c7353167c6d51b8adadceb7c4b2e0b998deb d5c75acd2320400227d1888ad9ca5e5ec56a1c2b9c137db426552d9917fd5fc4 e6c4b5962a406b09a27bd7dc77eee0e2ade33f2c0b5ffdb5e56aaa9b344eaa8a ceb75a846b2ac678e3e3ce971acf343d80f085df68a6663f72eb2cb74fea7b4e 44f9cf9fa61794e73843bbea5b2164807a6b9e3c34809d6a39002ce25ea1f4dc a39a01205edcecfec13a3acfed1c1b41936c28e167ff94b98d310c3d0a0078d2 dece7c83b1b6870e3c39b80f058b2d294d2fb32a71d9125f40bac769a7ae1f51: (15.867258495s)
	W0809 19:13:13.930350 1011483 kubeadm.go:689] Failed to stop kube-system containers: port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 0f77a9a2da8b6d363a14c0b5d40080f9af67c6640d2822a43e82eb09bad297d9 3b23f5302c00ae4a6dc8fdd99a47353d345d3eaf4a110ea7793654f4a876cff8 4cdeeccf1e020b0f8436e2bf45bc3520fc0e59a7253a1255b7b6a12cae630441 bc3074123d41a976429c2777747bda837d51d3dcc586eac5bb23fb9be27dffe2 d222edca27c82b853c2e77a2ee623d656f0e5640f2d003d2b5a25cc08495ae21 f112396efdb77643d61e214e783dfe5ff3446c800c5ca9b473e5bc872478e5f6 0a8bfc8b18c3e18e2b7746b4b426c7353167c6d51b8adadceb7c4b2e0b998deb d5c75acd2320400227d1888ad9ca5e5ec56a1c2b9c137db426552d9917fd5fc4 e6c4b5962a406b09a27bd7dc77eee0e2ade33f2c0b5ffdb5e56aaa9b344eaa8a ceb75a846b2ac678e3e3ce971acf343d80f085df68a6663f72eb2cb74fea7b4e 44f9cf9fa61794e73843bbea5b2164807a6b9e3c34809d6a39002ce25ea1f4dc a39a01205edcecfec13a3acfed1c1b41936c28e167ff94b98d310c3d0a0078d2 dece7c83b1b6870e3c39b80f058b2d294d2fb32a71d9125f40bac769a7ae1f51: Proce
ss exited with status 1
	stdout:
	0f77a9a2da8b6d363a14c0b5d40080f9af67c6640d2822a43e82eb09bad297d9
	3b23f5302c00ae4a6dc8fdd99a47353d345d3eaf4a110ea7793654f4a876cff8
	4cdeeccf1e020b0f8436e2bf45bc3520fc0e59a7253a1255b7b6a12cae630441
	bc3074123d41a976429c2777747bda837d51d3dcc586eac5bb23fb9be27dffe2
	d222edca27c82b853c2e77a2ee623d656f0e5640f2d003d2b5a25cc08495ae21
	f112396efdb77643d61e214e783dfe5ff3446c800c5ca9b473e5bc872478e5f6
	
	stderr:
	E0809 19:13:13.926929    3517 remote_runtime.go:505] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a8bfc8b18c3e18e2b7746b4b426c7353167c6d51b8adadceb7c4b2e0b998deb\": container with ID starting with 0a8bfc8b18c3e18e2b7746b4b426c7353167c6d51b8adadceb7c4b2e0b998deb not found: ID does not exist" containerID="0a8bfc8b18c3e18e2b7746b4b426c7353167c6d51b8adadceb7c4b2e0b998deb"
	time="2023-08-09T19:13:13Z" level=fatal msg="stopping the container \"0a8bfc8b18c3e18e2b7746b4b426c7353167c6d51b8adadceb7c4b2e0b998deb\": rpc error: code = NotFound desc = could not find container \"0a8bfc8b18c3e18e2b7746b4b426c7353167c6d51b8adadceb7c4b2e0b998deb\": container with ID starting with 0a8bfc8b18c3e18e2b7746b4b426c7353167c6d51b8adadceb7c4b2e0b998deb not found: ID does not exist"
	I0809 19:13:13.930421 1011483 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0809 19:13:14.038407 1011483 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0809 19:13:14.047075 1011483 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Aug  9 19:11 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Aug  9 19:11 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Aug  9 19:11 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Aug  9 19:11 /etc/kubernetes/scheduler.conf
	
	I0809 19:13:14.047140 1011483 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0809 19:13:14.056754 1011483 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0809 19:13:14.068246 1011483 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0809 19:13:14.080900 1011483 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0809 19:13:14.080969 1011483 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0809 19:13:14.090995 1011483 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0809 19:13:14.099564 1011483 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0809 19:13:14.099623 1011483 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
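
kubeadm.go:166 above treats a failed grep as "this kubeconfig does not point at https://control-plane.minikube.internal:8443" and deletes the file so the kubeconfig phase below regenerates it. The same check in plain Go, with the endpoint and path taken from the log (the helper name is illustrative):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // pruneStaleKubeconfig removes conf when it does not reference
    // endpoint, so `kubeadm init phase kubeconfig` recreates it, as the
    // grep-then-rm sequence above does.
    func pruneStaleKubeconfig(conf, endpoint string) error {
        data, err := os.ReadFile(conf)
        if err != nil {
            return err
        }
        if strings.Contains(string(data), endpoint) {
            return nil // already points at the control plane; keep it
        }
        return os.Remove(conf)
    }

    func main() {
        fmt.Println(pruneStaleKubeconfig("/etc/kubernetes/scheduler.conf",
            "https://control-plane.minikube.internal:8443"))
    }
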
	I0809 19:13:14.109265 1011483 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0809 19:13:14.129172 1011483 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0809 19:13:14.129200 1011483 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0809 19:13:14.188780 1011483 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0809 19:13:14.637727 1011483 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0809 19:13:14.814468 1011483 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0809 19:13:14.894762 1011483 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
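
The five kubeadm invocations above are the "soft restart" path: rather than a full `kubeadm init`, only the phases that regenerate certificates, kubeconfigs, the kubelet bootstrap, the static-pod manifests, and local etcd are re-run against the refreshed /var/tmp/minikube/kubeadm.yaml. A sketch of driving that sequence from Go, with the versioned binary path and PATH/env handling simplified away:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // rerunPhases re-executes the kubeadm init phases used for a cluster
    // restart, in the order logged above. Binary lookup is simplified;
    // minikube actually calls /var/lib/minikube/binaries/<ver>/kubeadm
    // with an adjusted PATH.
    func rerunPhases(kubeadmYAML string) error {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := append([]string{"kubeadm", "init", "phase"}, p...)
            args = append(args, "--config", kubeadmYAML)
            if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("%v: %v: %s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        fmt.Println(rerunPhases("/var/tmp/minikube/kubeadm.yaml"))
    }
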
	I0809 19:13:15.071671 1011483 api_server.go:52] waiting for apiserver process to appear ...
	I0809 19:13:15.071759 1011483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0809 19:13:15.092474 1011483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0809 19:13:15.605848 1011483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0809 19:13:16.105327 1011483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0809 19:13:16.117143 1011483 api_server.go:72] duration metric: took 1.045473865s to wait for apiserver process to appear ...
	I0809 19:13:16.117167 1011483 api_server.go:88] waiting for apiserver healthz status ...
	I0809 19:13:16.117183 1011483 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0809 19:13:18.563648 1011483 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0809 19:13:18.563688 1011483 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0809 19:13:18.563702 1011483 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0809 19:13:18.673938 1011483 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0809 19:13:18.673992 1011483 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0809 19:13:19.174956 1011483 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0809 19:13:19.179978 1011483 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0809 19:13:19.180012 1011483 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0809 19:13:19.674491 1011483 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0809 19:13:19.680069 1011483 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0809 19:13:19.680101 1011483 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0809 19:13:20.174715 1011483 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0809 19:13:20.179262 1011483 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0809 19:13:20.186815 1011483 api_server.go:141] control plane version: v1.27.4
	I0809 19:13:20.186846 1011483 api_server.go:131] duration metric: took 4.06967094s to wait for apiserver health ...
	I0809 19:13:20.186858 1011483 cni.go:84] Creating CNI manager for ""
	I0809 19:13:20.186866 1011483 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0809 19:13:20.188596 1011483 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0809 19:13:20.190048 1011483 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0809 19:13:20.193932 1011483 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.27.4/kubectl ...
	I0809 19:13:20.193949 1011483 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0809 19:13:20.211263 1011483 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0809 19:13:20.884821 1011483 system_pods.go:43] waiting for kube-system pods to appear ...
	I0809 19:13:20.891440 1011483 system_pods.go:59] 7 kube-system pods found
	I0809 19:13:20.891469 1011483 system_pods.go:61] "coredns-5d78c9869d-zwnjn" [7c939e8b-f847-44a3-984e-6276b66d3afc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0809 19:13:20.891477 1011483 system_pods.go:61] "etcd-pause-734678" [273b12a3-5c11-4e4e-9d26-b9c102acd768] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0809 19:13:20.891482 1011483 system_pods.go:61] "kindnet-xxgzn" [9265ba13-07f0-4c44-a920-74175ec0e07a] Running
	I0809 19:13:20.891488 1011483 system_pods.go:61] "kube-apiserver-pause-734678" [29c446c4-0a22-46f2-b796-b3d0207f125f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0809 19:13:20.891496 1011483 system_pods.go:61] "kube-controller-manager-pause-734678" [2405dfa9-ff6f-448f-921c-4e963bae6ab8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0809 19:13:20.891500 1011483 system_pods.go:61] "kube-proxy-q25ss" [23d08f14-790a-46be-87b1-032c144a76cb] Running
	I0809 19:13:20.891507 1011483 system_pods.go:61] "kube-scheduler-pause-734678" [6e0a6dd5-f76d-4b78-8df9-f088f418a79e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0809 19:13:20.891513 1011483 system_pods.go:74] duration metric: took 6.670221ms to wait for pod list to return data ...
	I0809 19:13:20.891522 1011483 node_conditions.go:102] verifying NodePressure condition ...
	I0809 19:13:20.894026 1011483 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0809 19:13:20.894049 1011483 node_conditions.go:123] node cpu capacity is 8
	I0809 19:13:20.894059 1011483 node_conditions.go:105] duration metric: took 2.533035ms to run NodePressure ...
	I0809 19:13:20.894075 1011483 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0809 19:13:21.046940 1011483 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0809 19:13:21.050884 1011483 kubeadm.go:787] kubelet initialised
	I0809 19:13:21.050907 1011483 kubeadm.go:788] duration metric: took 3.939723ms waiting for restarted kubelet to initialise ...
	I0809 19:13:21.050915 1011483 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0809 19:13:21.056014 1011483 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-zwnjn" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:22.073376 1011483 pod_ready.go:92] pod "coredns-5d78c9869d-zwnjn" in "kube-system" namespace has status "Ready":"True"
	I0809 19:13:22.073401 1011483 pod_ready.go:81] duration metric: took 1.017363406s waiting for pod "coredns-5d78c9869d-zwnjn" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:22.073412 1011483 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-734678" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:24.095042 1011483 pod_ready.go:102] pod "etcd-pause-734678" in "kube-system" namespace has status "Ready":"False"
	I0809 19:13:26.595823 1011483 pod_ready.go:102] pod "etcd-pause-734678" in "kube-system" namespace has status "Ready":"False"
	I0809 19:13:29.094464 1011483 pod_ready.go:102] pod "etcd-pause-734678" in "kube-system" namespace has status "Ready":"False"
	I0809 19:13:31.095010 1011483 pod_ready.go:102] pod "etcd-pause-734678" in "kube-system" namespace has status "Ready":"False"
	I0809 19:13:33.095618 1011483 pod_ready.go:92] pod "etcd-pause-734678" in "kube-system" namespace has status "Ready":"True"
	I0809 19:13:33.095672 1011483 pod_ready.go:81] duration metric: took 11.022251336s waiting for pod "etcd-pause-734678" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:33.095696 1011483 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-734678" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:33.101219 1011483 pod_ready.go:92] pod "kube-apiserver-pause-734678" in "kube-system" namespace has status "Ready":"True"
	I0809 19:13:33.101243 1011483 pod_ready.go:81] duration metric: took 5.531107ms waiting for pod "kube-apiserver-pause-734678" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:33.101256 1011483 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-734678" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:33.107554 1011483 pod_ready.go:92] pod "kube-controller-manager-pause-734678" in "kube-system" namespace has status "Ready":"True"
	I0809 19:13:33.107578 1011483 pod_ready.go:81] duration metric: took 6.313562ms waiting for pod "kube-controller-manager-pause-734678" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:33.107591 1011483 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-q25ss" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:33.112970 1011483 pod_ready.go:92] pod "kube-proxy-q25ss" in "kube-system" namespace has status "Ready":"True"
	I0809 19:13:33.112993 1011483 pod_ready.go:81] duration metric: took 5.393412ms waiting for pod "kube-proxy-q25ss" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:33.113005 1011483 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-734678" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:33.118913 1011483 pod_ready.go:92] pod "kube-scheduler-pause-734678" in "kube-system" namespace has status "Ready":"True"
	I0809 19:13:33.118936 1011483 pod_ready.go:81] duration metric: took 5.923321ms waiting for pod "kube-scheduler-pause-734678" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:33.118945 1011483 pod_ready.go:38] duration metric: took 12.068019318s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
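
The pod_ready.go waits above poll each system-critical pod until its Ready condition turns True (etcd took about 11s here; the rest were already Ready). A sketch of the same wait with client-go, using the kubeconfig path from this run; the 500ms poll interval is illustrative, while the 4m0s timeout matches the log:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls until the named kube-system pod reports
    // Ready=True, like the pod_ready.go waits above.
    func waitPodReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // not visible yet; keep polling
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17011-816603/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println(waitPodReady(cs, "etcd-pause-734678", 4*time.Minute))
    }
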
	I0809 19:13:33.118968 1011483 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0809 19:13:33.126814 1011483 ops.go:34] apiserver oom_adj: -16
	I0809 19:13:33.126837 1011483 kubeadm.go:640] restartCluster took 55.162938995s
	I0809 19:13:33.126844 1011483 kubeadm.go:406] StartCluster complete in 55.233934514s
	I0809 19:13:33.126858 1011483 settings.go:142] acquiring lock: {Name:mk873daac26ba3897eede1f5f8e0b40f2c63510f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 19:13:33.126931 1011483 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17011-816603/kubeconfig
	I0809 19:13:33.128886 1011483 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17011-816603/kubeconfig: {Name:mk4f98edb5dc8df50bdb1180a23f12dadd75d59f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 19:13:33.130392 1011483 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0809 19:13:33.130359 1011483 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0809 19:13:33.130393 1011483 kapi.go:59] client config for pause-734678: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17011-816603/.minikube/profiles/pause-734678/client.crt", KeyFile:"/home/jenkins/minikube-integration/17011-816603/.minikube/profiles/pause-734678/client.key", CAFile:"/home/jenkins/minikube-integration/17011-816603/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d27100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0809 19:13:33.132662 1011483 out.go:177] * Enabled addons: 
	I0809 19:13:33.131257 1011483 config.go:182] Loaded profile config "pause-734678": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0809 19:13:33.134254 1011483 addons.go:502] enable addons completed in 3.932609ms: enabled=[]
	I0809 19:13:33.134668 1011483 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-734678" context rescaled to 1 replicas
	I0809 19:13:33.134708 1011483 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0809 19:13:33.136299 1011483 out.go:177] * Verifying Kubernetes components...
	I0809 19:13:33.137857 1011483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0809 19:13:33.213252 1011483 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0809 19:13:33.213254 1011483 node_ready.go:35] waiting up to 6m0s for node "pause-734678" to be "Ready" ...
	I0809 19:13:33.293435 1011483 node_ready.go:49] node "pause-734678" has status "Ready":"True"
	I0809 19:13:33.293459 1011483 node_ready.go:38] duration metric: took 80.174189ms waiting for node "pause-734678" to be "Ready" ...
	I0809 19:13:33.293468 1011483 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0809 19:13:33.495214 1011483 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-zwnjn" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:33.893371 1011483 pod_ready.go:92] pod "coredns-5d78c9869d-zwnjn" in "kube-system" namespace has status "Ready":"True"
	I0809 19:13:33.964186 1011483 pod_ready.go:81] duration metric: took 468.932979ms waiting for pod "coredns-5d78c9869d-zwnjn" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:33.964221 1011483 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-734678" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:34.293243 1011483 pod_ready.go:92] pod "etcd-pause-734678" in "kube-system" namespace has status "Ready":"True"
	I0809 19:13:34.293267 1011483 pod_ready.go:81] duration metric: took 329.02896ms waiting for pod "etcd-pause-734678" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:34.293279 1011483 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-734678" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:34.693726 1011483 pod_ready.go:92] pod "kube-apiserver-pause-734678" in "kube-system" namespace has status "Ready":"True"
	I0809 19:13:34.693766 1011483 pod_ready.go:81] duration metric: took 400.47938ms waiting for pod "kube-apiserver-pause-734678" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:34.693783 1011483 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-734678" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:35.093279 1011483 pod_ready.go:92] pod "kube-controller-manager-pause-734678" in "kube-system" namespace has status "Ready":"True"
	I0809 19:13:35.093303 1011483 pod_ready.go:81] duration metric: took 399.512359ms waiting for pod "kube-controller-manager-pause-734678" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:35.093313 1011483 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q25ss" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:35.493610 1011483 pod_ready.go:92] pod "kube-proxy-q25ss" in "kube-system" namespace has status "Ready":"True"
	I0809 19:13:35.493634 1011483 pod_ready.go:81] duration metric: took 400.315645ms waiting for pod "kube-proxy-q25ss" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:35.493646 1011483 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-734678" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:35.893303 1011483 pod_ready.go:92] pod "kube-scheduler-pause-734678" in "kube-system" namespace has status "Ready":"True"
	I0809 19:13:35.893328 1011483 pod_ready.go:81] duration metric: took 399.676794ms waiting for pod "kube-scheduler-pause-734678" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:35.893339 1011483 pod_ready.go:38] duration metric: took 2.599855521s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0809 19:13:35.893356 1011483 api_server.go:52] waiting for apiserver process to appear ...
	I0809 19:13:35.893413 1011483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0809 19:13:35.906008 1011483 api_server.go:72] duration metric: took 2.771266372s to wait for apiserver process to appear ...
	I0809 19:13:35.906040 1011483 api_server.go:88] waiting for apiserver healthz status ...
	I0809 19:13:35.906061 1011483 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0809 19:13:35.911748 1011483 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0809 19:13:35.912824 1011483 api_server.go:141] control plane version: v1.27.4
	I0809 19:13:35.912846 1011483 api_server.go:131] duration metric: took 6.798164ms to wait for apiserver health ...
	I0809 19:13:35.912856 1011483 system_pods.go:43] waiting for kube-system pods to appear ...
	I0809 19:13:36.097199 1011483 system_pods.go:59] 7 kube-system pods found
	I0809 19:13:36.097230 1011483 system_pods.go:61] "coredns-5d78c9869d-zwnjn" [7c939e8b-f847-44a3-984e-6276b66d3afc] Running
	I0809 19:13:36.097235 1011483 system_pods.go:61] "etcd-pause-734678" [273b12a3-5c11-4e4e-9d26-b9c102acd768] Running
	I0809 19:13:36.097239 1011483 system_pods.go:61] "kindnet-xxgzn" [9265ba13-07f0-4c44-a920-74175ec0e07a] Running
	I0809 19:13:36.097244 1011483 system_pods.go:61] "kube-apiserver-pause-734678" [29c446c4-0a22-46f2-b796-b3d0207f125f] Running
	I0809 19:13:36.097248 1011483 system_pods.go:61] "kube-controller-manager-pause-734678" [2405dfa9-ff6f-448f-921c-4e963bae6ab8] Running
	I0809 19:13:36.097253 1011483 system_pods.go:61] "kube-proxy-q25ss" [23d08f14-790a-46be-87b1-032c144a76cb] Running
	I0809 19:13:36.097256 1011483 system_pods.go:61] "kube-scheduler-pause-734678" [6e0a6dd5-f76d-4b78-8df9-f088f418a79e] Running
	I0809 19:13:36.097266 1011483 system_pods.go:74] duration metric: took 184.400786ms to wait for pod list to return data ...
	I0809 19:13:36.097275 1011483 default_sa.go:34] waiting for default service account to be created ...
	I0809 19:13:36.293083 1011483 default_sa.go:45] found service account: "default"
	I0809 19:13:36.293112 1011483 default_sa.go:55] duration metric: took 195.830656ms for default service account to be created ...
	I0809 19:13:36.293123 1011483 system_pods.go:116] waiting for k8s-apps to be running ...
	I0809 19:13:36.501290 1011483 system_pods.go:86] 7 kube-system pods found
	I0809 19:13:36.501318 1011483 system_pods.go:89] "coredns-5d78c9869d-zwnjn" [7c939e8b-f847-44a3-984e-6276b66d3afc] Running
	I0809 19:13:36.501324 1011483 system_pods.go:89] "etcd-pause-734678" [273b12a3-5c11-4e4e-9d26-b9c102acd768] Running
	I0809 19:13:36.501328 1011483 system_pods.go:89] "kindnet-xxgzn" [9265ba13-07f0-4c44-a920-74175ec0e07a] Running
	I0809 19:13:36.501332 1011483 system_pods.go:89] "kube-apiserver-pause-734678" [29c446c4-0a22-46f2-b796-b3d0207f125f] Running
	I0809 19:13:36.501336 1011483 system_pods.go:89] "kube-controller-manager-pause-734678" [2405dfa9-ff6f-448f-921c-4e963bae6ab8] Running
	I0809 19:13:36.501343 1011483 system_pods.go:89] "kube-proxy-q25ss" [23d08f14-790a-46be-87b1-032c144a76cb] Running
	I0809 19:13:36.501349 1011483 system_pods.go:89] "kube-scheduler-pause-734678" [6e0a6dd5-f76d-4b78-8df9-f088f418a79e] Running
	I0809 19:13:36.501358 1011483 system_pods.go:126] duration metric: took 208.229085ms to wait for k8s-apps to be running ...
	I0809 19:13:36.501367 1011483 system_svc.go:44] waiting for kubelet service to be running ....
	I0809 19:13:36.501418 1011483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0809 19:13:36.533536 1011483 system_svc.go:56] duration metric: took 32.149726ms WaitForService to wait for kubelet.
	I0809 19:13:36.533577 1011483 kubeadm.go:581] duration metric: took 3.398838007s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0809 19:13:36.533602 1011483 node_conditions.go:102] verifying NodePressure condition ...
	I0809 19:13:36.693814 1011483 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0809 19:13:36.693838 1011483 node_conditions.go:123] node cpu capacity is 8
	I0809 19:13:36.693849 1011483 node_conditions.go:105] duration metric: took 160.242208ms to run NodePressure ...
	I0809 19:13:36.693859 1011483 start.go:228] waiting for startup goroutines ...
	I0809 19:13:36.693865 1011483 start.go:233] waiting for cluster config update ...
	I0809 19:13:36.693871 1011483 start.go:242] writing updated cluster config ...
	I0809 19:13:36.694238 1011483 ssh_runner.go:195] Run: rm -f paused
	I0809 19:13:36.762686 1011483 start.go:599] kubectl: 1.27.4, cluster: 1.27.4 (minor skew: 0)
	I0809 19:13:36.765170 1011483 out.go:177] * Done! kubectl is now configured to use "pause-734678" cluster and "default" namespace by default

** /stderr **
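To replay the readiness checks recorded in the stderr above by hand, the same probes minikube issues can be run against the cluster directly. A minimal sketch, assuming the "pause-734678" kubeconfig context written by the run above is still present (pod name taken from the log):

	# apiserver healthz probe, equivalent to the api_server.go:253 check above
	kubectl --context pause-734678 get --raw /healthz
	# per-pod Ready condition, equivalent to the pod_ready.go waits above
	kubectl --context pause-734678 -n kube-system wait --for=condition=Ready pod/kube-scheduler-pause-734678 --timeout=60s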
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-734678
helpers_test.go:235: (dbg) docker inspect pause-734678:

-- stdout --
	[
	    {
	        "Id": "455c5c1d8c5d4449aa59e19e9578b80ef8192cd8c71866255d281d670ae7a0af",
	        "Created": "2023-08-09T19:11:24.549859266Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1001180,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-08-09T19:11:24.846197287Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:51eee4927f7e218e70017d38db072c77f0b6036bbfe389eac8043694e7529d58",
	        "ResolvConfPath": "/var/lib/docker/containers/455c5c1d8c5d4449aa59e19e9578b80ef8192cd8c71866255d281d670ae7a0af/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/455c5c1d8c5d4449aa59e19e9578b80ef8192cd8c71866255d281d670ae7a0af/hostname",
	        "HostsPath": "/var/lib/docker/containers/455c5c1d8c5d4449aa59e19e9578b80ef8192cd8c71866255d281d670ae7a0af/hosts",
	        "LogPath": "/var/lib/docker/containers/455c5c1d8c5d4449aa59e19e9578b80ef8192cd8c71866255d281d670ae7a0af/455c5c1d8c5d4449aa59e19e9578b80ef8192cd8c71866255d281d670ae7a0af-json.log",
	        "Name": "/pause-734678",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-734678:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-734678",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4bbd60dde32775251b9b53c046ecaebb1fa9752ef7e51a68d60310d84e3d59fb-init/diff:/var/lib/docker/overlay2/dffcbda35d4e6780372e77e03c9f976a612c164e3ac348da817dd7b6996e96fb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4bbd60dde32775251b9b53c046ecaebb1fa9752ef7e51a68d60310d84e3d59fb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4bbd60dde32775251b9b53c046ecaebb1fa9752ef7e51a68d60310d84e3d59fb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4bbd60dde32775251b9b53c046ecaebb1fa9752ef7e51a68d60310d84e3d59fb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-734678",
	                "Source": "/var/lib/docker/volumes/pause-734678/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-734678",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-734678",
	                "name.minikube.sigs.k8s.io": "pause-734678",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f676634a700c3f44875cefcdbe71ad06cbcb8db26e7e22f71623fbbec48bb608",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33616"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33615"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33612"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33614"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33613"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f676634a700c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-734678": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "455c5c1d8c5d",
	                        "pause-734678"
	                    ],
	                    "NetworkID": "8e065b7d722331af1dd6c2f0d877c8db09a553617a646a1b3a8e8b1b15ce4d3a",
	                    "EndpointID": "d0ab2d877dcb4ed4c0260ff81533f81e4b3216644fcf039454aa2ee86965348b",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
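Individual fields of the inspect payload above can be extracted without paging through the full JSON by passing a Go template to docker inspect --format; a sketch using the container name from this report:

	# container state and init PID (State block above)
	docker inspect --format '{{.State.Status}} pid={{.State.Pid}}' pause-734678
	# host port mapped to the apiserver's 8443/tcp (NetworkSettings.Ports above)
	docker inspect --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' pause-734678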
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-734678 -n pause-734678
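The --format flag above selects one field of minikube's status output via a Go template; other components can be checked the same way. A sketch against the same profile (field names assumed from minikube's status struct):

	out/minikube-linux-amd64 status -p pause-734678 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'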
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-734678 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-734678 logs -n 25: (1.57931315s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -p pause-734678                | pause-734678           | jenkins | v1.31.1 | 09 Aug 23 19:12 UTC | 09 Aug 23 19:13 UTC |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| start   | -p cert-expiration-023346      | cert-expiration-023346 | jenkins | v1.31.1 | 09 Aug 23 19:13 UTC | 09 Aug 23 19:13 UTC |
	|         | --memory=2048                  |                        |         |         |                     |                     |
	|         | --cert-expiration=8760h        |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| ssh     | -p auto-393336 pgrep -a        | auto-393336            | jenkins | v1.31.1 | 09 Aug 23 19:13 UTC | 09 Aug 23 19:13 UTC |
	|         | kubelet                        |                        |         |         |                     |                     |
	| delete  | -p cert-expiration-023346      | cert-expiration-023346 | jenkins | v1.31.1 | 09 Aug 23 19:13 UTC | 09 Aug 23 19:13 UTC |
	| start   | -p kindnet-393336              | kindnet-393336         | jenkins | v1.31.1 | 09 Aug 23 19:13 UTC |                     |
	|         | --memory=3072                  |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true  |                        |         |         |                     |                     |
	|         | --wait-timeout=15m             |                        |         |         |                     |                     |
	|         | --cni=kindnet --driver=docker  |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| ssh     | -p auto-393336 sudo cat        | auto-393336            | jenkins | v1.31.1 | 09 Aug 23 19:13 UTC | 09 Aug 23 19:13 UTC |
	|         | /etc/nsswitch.conf             |                        |         |         |                     |                     |
	| ssh     | -p auto-393336 sudo cat        | auto-393336            | jenkins | v1.31.1 | 09 Aug 23 19:13 UTC | 09 Aug 23 19:13 UTC |
	|         | /etc/hosts                     |                        |         |         |                     |                     |
	| ssh     | -p auto-393336 sudo cat        | auto-393336            | jenkins | v1.31.1 | 09 Aug 23 19:13 UTC | 09 Aug 23 19:13 UTC |
	|         | /etc/resolv.conf               |                        |         |         |                     |                     |
	| ssh     | -p auto-393336 sudo crictl     | auto-393336            | jenkins | v1.31.1 | 09 Aug 23 19:13 UTC | 09 Aug 23 19:13 UTC |
	|         | pods                           |                        |         |         |                     |                     |
	| ssh     | -p auto-393336 sudo crictl ps  | auto-393336            | jenkins | v1.31.1 | 09 Aug 23 19:13 UTC | 09 Aug 23 19:13 UTC |
	|         | --all                          |                        |         |         |                     |                     |
	| ssh     | -p auto-393336 sudo find       | auto-393336            | jenkins | v1.31.1 | 09 Aug 23 19:13 UTC | 09 Aug 23 19:13 UTC |
	|         | /etc/cni -type f -exec sh -c   |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;           |                        |         |         |                     |                     |
	| ssh     | -p auto-393336 sudo ip a s     | auto-393336            | jenkins | v1.31.1 | 09 Aug 23 19:13 UTC | 09 Aug 23 19:13 UTC |
	| ssh     | -p auto-393336 sudo ip r s     | auto-393336            | jenkins | v1.31.1 | 09 Aug 23 19:13 UTC | 09 Aug 23 19:13 UTC |
	| ssh     | -p auto-393336 sudo            | auto-393336            | jenkins | v1.31.1 | 09 Aug 23 19:13 UTC | 09 Aug 23 19:13 UTC |
	|         | iptables-save                  |                        |         |         |                     |                     |
	| ssh     | -p auto-393336 sudo iptables   | auto-393336            | jenkins | v1.31.1 | 09 Aug 23 19:13 UTC | 09 Aug 23 19:13 UTC |
	|         | -t nat -L -n -v                |                        |         |         |                     |                     |
	| ssh     | -p auto-393336 sudo systemctl  | auto-393336            | jenkins | v1.31.1 | 09 Aug 23 19:13 UTC | 09 Aug 23 19:13 UTC |
	|         | status kubelet --all --full    |                        |         |         |                     |                     |
	|         | --no-pager                     |                        |         |         |                     |                     |
	| ssh     | -p auto-393336 sudo systemctl  | auto-393336            | jenkins | v1.31.1 | 09 Aug 23 19:13 UTC | 09 Aug 23 19:13 UTC |
	|         | cat kubelet --no-pager         |                        |         |         |                     |                     |
	| ssh     | -p auto-393336 sudo journalctl | auto-393336            | jenkins | v1.31.1 | 09 Aug 23 19:13 UTC | 09 Aug 23 19:13 UTC |
	|         | -xeu kubelet --all --full      |                        |         |         |                     |                     |
	|         | --no-pager                     |                        |         |         |                     |                     |
	| ssh     | -p auto-393336 sudo cat        | auto-393336            | jenkins | v1.31.1 | 09 Aug 23 19:13 UTC | 09 Aug 23 19:13 UTC |
	|         | /etc/kubernetes/kubelet.conf   |                        |         |         |                     |                     |
	| ssh     | -p auto-393336 sudo cat        | auto-393336            | jenkins | v1.31.1 | 09 Aug 23 19:13 UTC | 09 Aug 23 19:13 UTC |
	|         | /var/lib/kubelet/config.yaml   |                        |         |         |                     |                     |
	| ssh     | -p auto-393336 sudo systemctl  | auto-393336            | jenkins | v1.31.1 | 09 Aug 23 19:13 UTC |                     |
	|         | status docker --all --full     |                        |         |         |                     |                     |
	|         | --no-pager                     |                        |         |         |                     |                     |
	| ssh     | -p auto-393336 sudo systemctl  | auto-393336            | jenkins | v1.31.1 | 09 Aug 23 19:13 UTC | 09 Aug 23 19:13 UTC |
	|         | cat docker --no-pager          |                        |         |         |                     |                     |
	| ssh     | -p auto-393336 sudo cat        | auto-393336            | jenkins | v1.31.1 | 09 Aug 23 19:13 UTC |                     |
	|         | /etc/docker/daemon.json        |                        |         |         |                     |                     |
	| ssh     | -p auto-393336 sudo docker     | auto-393336            | jenkins | v1.31.1 | 09 Aug 23 19:13 UTC |                     |
	|         | system info                    |                        |         |         |                     |                     |
	| ssh     | -p auto-393336 sudo systemctl  | auto-393336            | jenkins | v1.31.1 | 09 Aug 23 19:13 UTC |                     |
	|         | status cri-docker --all --full |                        |         |         |                     |                     |
	|         | --no-pager                     |                        |         |         |                     |                     |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/09 19:13:29
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.20.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0809 19:13:29.413362 1020043 out.go:296] Setting OutFile to fd 1 ...
	I0809 19:13:29.413500 1020043 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 19:13:29.413509 1020043 out.go:309] Setting ErrFile to fd 2...
	I0809 19:13:29.413514 1020043 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 19:13:29.413707 1020043 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17011-816603/.minikube/bin
	I0809 19:13:29.414312 1020043 out.go:303] Setting JSON to false
	I0809 19:13:29.425334 1020043 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":10565,"bootTime":1691597845,"procs":843,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1038-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0809 19:13:29.425427 1020043 start.go:138] virtualization: kvm guest
	I0809 19:13:29.427788 1020043 out.go:177] * [kindnet-393336] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)
	I0809 19:13:29.429355 1020043 notify.go:220] Checking for updates...
	I0809 19:13:29.430699 1020043 out.go:177]   - MINIKUBE_LOCATION=17011
	I0809 19:13:29.434523 1020043 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 19:13:29.435914 1020043 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17011-816603/kubeconfig
	I0809 19:13:29.437283 1020043 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17011-816603/.minikube
	I0809 19:13:29.438542 1020043 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0809 19:13:29.440138 1020043 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0809 19:13:29.442000 1020043 config.go:182] Loaded profile config "auto-393336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0809 19:13:29.442109 1020043 config.go:182] Loaded profile config "kubernetes-upgrade-222913": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0-rc.0
	I0809 19:13:29.442226 1020043 config.go:182] Loaded profile config "pause-734678": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0809 19:13:29.442310 1020043 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 19:13:29.467570 1020043 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0809 19:13:29.467693 1020043 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0809 19:13:29.528233 1020043 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:66 SystemTime:2023-08-09 19:13:29.517937089 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1038-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0809 19:13:29.528439 1020043 docker.go:294] overlay module found
	I0809 19:13:29.531005 1020043 out.go:177] * Using the docker driver based on user configuration
	I0809 19:13:29.532334 1020043 start.go:298] selected driver: docker
	I0809 19:13:29.532350 1020043 start.go:901] validating driver "docker" against <nil>
	I0809 19:13:29.532363 1020043 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0809 19:13:29.533282 1020043 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0809 19:13:29.609805 1020043 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:66 SystemTime:2023-08-09 19:13:29.600660145 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1038-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0809 19:13:29.609982 1020043 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0809 19:13:29.610267 1020043 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0809 19:13:29.611994 1020043 out.go:177] * Using Docker driver with root privileges
	I0809 19:13:29.613215 1020043 cni.go:84] Creating CNI manager for "kindnet"
	I0809 19:13:29.613242 1020043 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0809 19:13:29.613255 1020043 start_flags.go:319] config:
	{Name:kindnet-393336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:kindnet-393336 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 19:13:29.614735 1020043 out.go:177] * Starting control plane node kindnet-393336 in cluster kindnet-393336
	I0809 19:13:29.615859 1020043 cache.go:122] Beginning downloading kic base image for docker with crio
	I0809 19:13:29.617057 1020043 out.go:177] * Pulling base image ...
	I0809 19:13:29.618166 1020043 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0809 19:13:29.618214 1020043 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17011-816603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4
	I0809 19:13:29.618222 1020043 cache.go:57] Caching tarball of preloaded images
	I0809 19:13:29.618284 1020043 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 in local docker daemon
	I0809 19:13:29.618351 1020043 preload.go:174] Found /home/jenkins/minikube-integration/17011-816603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0809 19:13:29.618367 1020043 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on crio
	I0809 19:13:29.618519 1020043 profile.go:148] Saving config to /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/kindnet-393336/config.json ...
	I0809 19:13:29.618542 1020043 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/kindnet-393336/config.json: {Name:mk1eb5b3166e5455a245a78e2a4f67ed67296e53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 19:13:29.636675 1020043 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 in local docker daemon, skipping pull
	I0809 19:13:29.636707 1020043 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 exists in daemon, skipping load
	I0809 19:13:29.636726 1020043 cache.go:195] Successfully downloaded all kic artifacts
	I0809 19:13:29.636779 1020043 start.go:365] acquiring machines lock for kindnet-393336: {Name:mkb40a2131763f1ac0cb1dbeabdd4af29bdfcfa0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 19:13:29.636897 1020043 start.go:369] acquired machines lock for "kindnet-393336" in 94.819µs
	I0809 19:13:29.636929 1020043 start.go:93] Provisioning new machine with config: &{Name:kindnet-393336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:kindnet-393336 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0809 19:13:29.637040 1020043 start.go:125] createHost starting for "" (driver="docker")
	I0809 19:13:29.094464 1011483 pod_ready.go:102] pod "etcd-pause-734678" in "kube-system" namespace has status "Ready":"False"
	I0809 19:13:31.095010 1011483 pod_ready.go:102] pod "etcd-pause-734678" in "kube-system" namespace has status "Ready":"False"
	I0809 19:13:33.095618 1011483 pod_ready.go:92] pod "etcd-pause-734678" in "kube-system" namespace has status "Ready":"True"
	I0809 19:13:33.095672 1011483 pod_ready.go:81] duration metric: took 11.022251336s waiting for pod "etcd-pause-734678" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:33.095696 1011483 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-734678" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:33.101219 1011483 pod_ready.go:92] pod "kube-apiserver-pause-734678" in "kube-system" namespace has status "Ready":"True"
	I0809 19:13:33.101243 1011483 pod_ready.go:81] duration metric: took 5.531107ms waiting for pod "kube-apiserver-pause-734678" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:33.101256 1011483 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-734678" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:33.107554 1011483 pod_ready.go:92] pod "kube-controller-manager-pause-734678" in "kube-system" namespace has status "Ready":"True"
	I0809 19:13:33.107578 1011483 pod_ready.go:81] duration metric: took 6.313562ms waiting for pod "kube-controller-manager-pause-734678" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:33.107591 1011483 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-q25ss" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:33.112970 1011483 pod_ready.go:92] pod "kube-proxy-q25ss" in "kube-system" namespace has status "Ready":"True"
	I0809 19:13:33.112993 1011483 pod_ready.go:81] duration metric: took 5.393412ms waiting for pod "kube-proxy-q25ss" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:33.113005 1011483 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-734678" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:33.118913 1011483 pod_ready.go:92] pod "kube-scheduler-pause-734678" in "kube-system" namespace has status "Ready":"True"
	I0809 19:13:33.118936 1011483 pod_ready.go:81] duration metric: took 5.923321ms waiting for pod "kube-scheduler-pause-734678" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:33.118945 1011483 pod_ready.go:38] duration metric: took 12.068019318s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0809 19:13:33.118968 1011483 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0809 19:13:33.126814 1011483 ops.go:34] apiserver oom_adj: -16
	I0809 19:13:33.126837 1011483 kubeadm.go:640] restartCluster took 55.162938995s
	I0809 19:13:33.126844 1011483 kubeadm.go:406] StartCluster complete in 55.233934514s
	I0809 19:13:33.126858 1011483 settings.go:142] acquiring lock: {Name:mk873daac26ba3897eede1f5f8e0b40f2c63510f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 19:13:33.126931 1011483 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17011-816603/kubeconfig
	I0809 19:13:33.128886 1011483 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17011-816603/kubeconfig: {Name:mk4f98edb5dc8df50bdb1180a23f12dadd75d59f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 19:13:33.130392 1011483 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0809 19:13:33.130359 1011483 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0809 19:13:33.130393 1011483 kapi.go:59] client config for pause-734678: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17011-816603/.minikube/profiles/pause-734678/client.crt", KeyFile:"/home/jenkins/minikube-integration/17011-816603/.minikube/profiles/pause-734678/client.key", CAFile:"/home/jenkins/minikube-integration/17011-816603/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d27100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0809 19:13:33.132662 1011483 out.go:177] * Enabled addons: 
	I0809 19:13:33.131257 1011483 config.go:182] Loaded profile config "pause-734678": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0809 19:13:33.134254 1011483 addons.go:502] enable addons completed in 3.932609ms: enabled=[]
	I0809 19:13:33.134668 1011483 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-734678" context rescaled to 1 replicas
	I0809 19:13:33.134708 1011483 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0809 19:13:33.136299 1011483 out.go:177] * Verifying Kubernetes components...
	I0809 19:13:33.137857 1011483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0809 19:13:33.213252 1011483 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0809 19:13:33.213254 1011483 node_ready.go:35] waiting up to 6m0s for node "pause-734678" to be "Ready" ...
	I0809 19:13:33.293435 1011483 node_ready.go:49] node "pause-734678" has status "Ready":"True"
	I0809 19:13:33.293459 1011483 node_ready.go:38] duration metric: took 80.174189ms waiting for node "pause-734678" to be "Ready" ...
	I0809 19:13:33.293468 1011483 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0809 19:13:33.495214 1011483 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-zwnjn" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:29.638753 1020043 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0809 19:13:29.638991 1020043 start.go:159] libmachine.API.Create for "kindnet-393336" (driver="docker")
	I0809 19:13:29.639017 1020043 client.go:168] LocalClient.Create starting
	I0809 19:13:29.639122 1020043 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca.pem
	I0809 19:13:29.639162 1020043 main.go:141] libmachine: Decoding PEM data...
	I0809 19:13:29.639182 1020043 main.go:141] libmachine: Parsing certificate...
	I0809 19:13:29.639277 1020043 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17011-816603/.minikube/certs/cert.pem
	I0809 19:13:29.639314 1020043 main.go:141] libmachine: Decoding PEM data...
	I0809 19:13:29.639333 1020043 main.go:141] libmachine: Parsing certificate...
	I0809 19:13:29.639768 1020043 cli_runner.go:164] Run: docker network inspect kindnet-393336 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0809 19:13:29.657004 1020043 cli_runner.go:211] docker network inspect kindnet-393336 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0809 19:13:29.657074 1020043 network_create.go:281] running [docker network inspect kindnet-393336] to gather additional debugging logs...
	I0809 19:13:29.657094 1020043 cli_runner.go:164] Run: docker network inspect kindnet-393336
	W0809 19:13:29.675011 1020043 cli_runner.go:211] docker network inspect kindnet-393336 returned with exit code 1
	I0809 19:13:29.675048 1020043 network_create.go:284] error running [docker network inspect kindnet-393336]: docker network inspect kindnet-393336: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-393336 not found
	I0809 19:13:29.675089 1020043 network_create.go:286] output of [docker network inspect kindnet-393336]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-393336 not found
	
	** /stderr **
	I0809 19:13:29.675146 1020043 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0809 19:13:29.696301 1020043 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-29989c4702eb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:ad:8a:31:88} reservation:<nil>}
	I0809 19:13:29.697280 1020043 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f5f975ef181d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:d8:4b:df:e2} reservation:<nil>}
	I0809 19:13:29.698709 1020043 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0015ac140}
	I0809 19:13:29.698741 1020043 network_create.go:123] attempt to create docker network kindnet-393336 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0809 19:13:29.698806 1020043 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-393336 kindnet-393336
	I0809 19:13:29.759883 1020043 network_create.go:107] docker network kindnet-393336 192.168.67.0/24 created
	I0809 19:13:29.759924 1020043 kic.go:117] calculated static IP "192.168.67.2" for the "kindnet-393336" container
	I0809 19:13:29.759986 1020043 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0809 19:13:29.776340 1020043 cli_runner.go:164] Run: docker volume create kindnet-393336 --label name.minikube.sigs.k8s.io=kindnet-393336 --label created_by.minikube.sigs.k8s.io=true
	I0809 19:13:29.795573 1020043 oci.go:103] Successfully created a docker volume kindnet-393336
	I0809 19:13:29.795697 1020043 cli_runner.go:164] Run: docker run --rm --name kindnet-393336-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-393336 --entrypoint /usr/bin/test -v kindnet-393336:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 -d /var/lib
	I0809 19:13:30.331350 1020043 oci.go:107] Successfully prepared a docker volume kindnet-393336
	I0809 19:13:30.331429 1020043 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0809 19:13:30.331459 1020043 kic.go:190] Starting extracting preloaded images to volume ...
	I0809 19:13:30.331578 1020043 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17011-816603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-393336:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 -I lz4 -xf /preloaded.tar -C /extractDir
	I0809 19:13:33.893371 1011483 pod_ready.go:92] pod "coredns-5d78c9869d-zwnjn" in "kube-system" namespace has status "Ready":"True"
	I0809 19:13:33.964186 1011483 pod_ready.go:81] duration metric: took 468.932979ms waiting for pod "coredns-5d78c9869d-zwnjn" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:33.964221 1011483 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-734678" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:34.293243 1011483 pod_ready.go:92] pod "etcd-pause-734678" in "kube-system" namespace has status "Ready":"True"
	I0809 19:13:34.293267 1011483 pod_ready.go:81] duration metric: took 329.02896ms waiting for pod "etcd-pause-734678" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:34.293279 1011483 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-734678" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:34.693726 1011483 pod_ready.go:92] pod "kube-apiserver-pause-734678" in "kube-system" namespace has status "Ready":"True"
	I0809 19:13:34.693766 1011483 pod_ready.go:81] duration metric: took 400.47938ms waiting for pod "kube-apiserver-pause-734678" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:34.693783 1011483 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-734678" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:35.093279 1011483 pod_ready.go:92] pod "kube-controller-manager-pause-734678" in "kube-system" namespace has status "Ready":"True"
	I0809 19:13:35.093303 1011483 pod_ready.go:81] duration metric: took 399.512359ms waiting for pod "kube-controller-manager-pause-734678" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:35.093313 1011483 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q25ss" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:35.493610 1011483 pod_ready.go:92] pod "kube-proxy-q25ss" in "kube-system" namespace has status "Ready":"True"
	I0809 19:13:35.493634 1011483 pod_ready.go:81] duration metric: took 400.315645ms waiting for pod "kube-proxy-q25ss" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:35.493646 1011483 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-734678" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:35.893303 1011483 pod_ready.go:92] pod "kube-scheduler-pause-734678" in "kube-system" namespace has status "Ready":"True"
	I0809 19:13:35.893328 1011483 pod_ready.go:81] duration metric: took 399.676794ms waiting for pod "kube-scheduler-pause-734678" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:35.893339 1011483 pod_ready.go:38] duration metric: took 2.599855521s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0809 19:13:35.893356 1011483 api_server.go:52] waiting for apiserver process to appear ...
	I0809 19:13:35.893413 1011483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0809 19:13:35.906008 1011483 api_server.go:72] duration metric: took 2.771266372s to wait for apiserver process to appear ...
	I0809 19:13:35.906040 1011483 api_server.go:88] waiting for apiserver healthz status ...
	I0809 19:13:35.906061 1011483 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0809 19:13:35.911748 1011483 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0809 19:13:35.912824 1011483 api_server.go:141] control plane version: v1.27.4
	I0809 19:13:35.912846 1011483 api_server.go:131] duration metric: took 6.798164ms to wait for apiserver health ...
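The healthz wait above reduces to a plain HTTPS GET against https://192.168.85.2:8443/healthz. A minimal Go sketch of the same probe (a sketch only; it assumes certificate verification is skipped, since the cluster CA is not in the host trust store):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// Skip TLS verification for the local probe (assumption: the
		// cluster CA is not trusted by the machine running the check).
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.85.2:8443/healthz")
		if err != nil {
			fmt.Println("healthz probe failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body)) // expect "200 ok", as logged above
	}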
	I0809 19:13:35.912856 1011483 system_pods.go:43] waiting for kube-system pods to appear ...
	I0809 19:13:36.097199 1011483 system_pods.go:59] 7 kube-system pods found
	I0809 19:13:36.097230 1011483 system_pods.go:61] "coredns-5d78c9869d-zwnjn" [7c939e8b-f847-44a3-984e-6276b66d3afc] Running
	I0809 19:13:36.097235 1011483 system_pods.go:61] "etcd-pause-734678" [273b12a3-5c11-4e4e-9d26-b9c102acd768] Running
	I0809 19:13:36.097239 1011483 system_pods.go:61] "kindnet-xxgzn" [9265ba13-07f0-4c44-a920-74175ec0e07a] Running
	I0809 19:13:36.097244 1011483 system_pods.go:61] "kube-apiserver-pause-734678" [29c446c4-0a22-46f2-b796-b3d0207f125f] Running
	I0809 19:13:36.097248 1011483 system_pods.go:61] "kube-controller-manager-pause-734678" [2405dfa9-ff6f-448f-921c-4e963bae6ab8] Running
	I0809 19:13:36.097253 1011483 system_pods.go:61] "kube-proxy-q25ss" [23d08f14-790a-46be-87b1-032c144a76cb] Running
	I0809 19:13:36.097256 1011483 system_pods.go:61] "kube-scheduler-pause-734678" [6e0a6dd5-f76d-4b78-8df9-f088f418a79e] Running
	I0809 19:13:36.097266 1011483 system_pods.go:74] duration metric: took 184.400786ms to wait for pod list to return data ...
	I0809 19:13:36.097275 1011483 default_sa.go:34] waiting for default service account to be created ...
	I0809 19:13:36.293083 1011483 default_sa.go:45] found service account: "default"
	I0809 19:13:36.293112 1011483 default_sa.go:55] duration metric: took 195.830656ms for default service account to be created ...
	I0809 19:13:36.293123 1011483 system_pods.go:116] waiting for k8s-apps to be running ...
	I0809 19:13:36.501290 1011483 system_pods.go:86] 7 kube-system pods found
	I0809 19:13:36.501318 1011483 system_pods.go:89] "coredns-5d78c9869d-zwnjn" [7c939e8b-f847-44a3-984e-6276b66d3afc] Running
	I0809 19:13:36.501324 1011483 system_pods.go:89] "etcd-pause-734678" [273b12a3-5c11-4e4e-9d26-b9c102acd768] Running
	I0809 19:13:36.501328 1011483 system_pods.go:89] "kindnet-xxgzn" [9265ba13-07f0-4c44-a920-74175ec0e07a] Running
	I0809 19:13:36.501332 1011483 system_pods.go:89] "kube-apiserver-pause-734678" [29c446c4-0a22-46f2-b796-b3d0207f125f] Running
	I0809 19:13:36.501336 1011483 system_pods.go:89] "kube-controller-manager-pause-734678" [2405dfa9-ff6f-448f-921c-4e963bae6ab8] Running
	I0809 19:13:36.501343 1011483 system_pods.go:89] "kube-proxy-q25ss" [23d08f14-790a-46be-87b1-032c144a76cb] Running
	I0809 19:13:36.501349 1011483 system_pods.go:89] "kube-scheduler-pause-734678" [6e0a6dd5-f76d-4b78-8df9-f088f418a79e] Running
	I0809 19:13:36.501358 1011483 system_pods.go:126] duration metric: took 208.229085ms to wait for k8s-apps to be running ...
	I0809 19:13:36.501367 1011483 system_svc.go:44] waiting for kubelet service to be running ....
	I0809 19:13:36.501418 1011483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0809 19:13:36.533536 1011483 system_svc.go:56] duration metric: took 32.149726ms WaitForService to wait for kubelet.
	I0809 19:13:36.533577 1011483 kubeadm.go:581] duration metric: took 3.398838007s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0809 19:13:36.533602 1011483 node_conditions.go:102] verifying NodePressure condition ...
	I0809 19:13:36.693814 1011483 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0809 19:13:36.693838 1011483 node_conditions.go:123] node cpu capacity is 8
	I0809 19:13:36.693849 1011483 node_conditions.go:105] duration metric: took 160.242208ms to run NodePressure ...
	I0809 19:13:36.693859 1011483 start.go:228] waiting for startup goroutines ...
	I0809 19:13:36.693865 1011483 start.go:233] waiting for cluster config update ...
	I0809 19:13:36.693871 1011483 start.go:242] writing updated cluster config ...
	I0809 19:13:36.694238 1011483 ssh_runner.go:195] Run: rm -f paused
	I0809 19:13:36.762686 1011483 start.go:599] kubectl: 1.27.4, cluster: 1.27.4 (minor skew: 0)
	I0809 19:13:36.765170 1011483 out.go:177] * Done! kubectl is now configured to use "pause-734678" cluster and "default" namespace by default
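The "Done!" line means the kubeconfig's current context now points at the pause-734678 cluster. A short client-go sketch (using the default kubeconfig loading rules, the same ones kubectl uses) can confirm which context was activated:

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the kubeconfig the way kubectl does and print the
		// context minikube just activated.
		rules := clientcmd.NewDefaultClientConfigLoadingRules()
		cfg, err := rules.Load()
		if err != nil {
			panic(err)
		}
		fmt.Println("current-context:", cfg.CurrentContext) // expect: pause-734678
	}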
	I0809 19:13:34.602256  997908 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.060359396s)
	W0809 19:13:34.602299  997908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I0809 19:13:34.602311  997908 logs.go:123] Gathering logs for kube-apiserver [f6c6efaf9452f4c2a29c61099f9c9129531fa3999a073341e988ff5ee0d6b94d] ...
	I0809 19:13:34.602323  997908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6c6efaf9452f4c2a29c61099f9c9129531fa3999a073341e988ff5ee0d6b94d"
	I0809 19:13:34.642717  997908 logs.go:123] Gathering logs for kube-apiserver [e97b8a3ea12a71cf8984e6680c20bce8316826fce63431b2832bfff8f81a7e13] ...
	I0809 19:13:34.642754  997908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e97b8a3ea12a71cf8984e6680c20bce8316826fce63431b2832bfff8f81a7e13"
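The two Run lines above tail container logs on the node with crictl. The same call can be scripted; a hedged Go sketch that shells out exactly as the log-gatherer does (the container ID here is the failing kube-apiserver entry named above):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Tail the last 400 log lines of the container, mirroring the
		// "sudo /usr/bin/crictl logs --tail 400 <id>" commands above.
		id := "f6c6efaf9452f4c2a29c61099f9c9129531fa3999a073341e988ff5ee0d6b94d"
		out, err := exec.Command("sudo", "/usr/bin/crictl",
			"logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			fmt.Println("crictl failed:", err)
		}
		fmt.Print(string(out))
	}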
	
	* 
	* ==> CRI-O <==
	* Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.255225339Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/0930f8838726042fda3b2d8b712144209a3492c5421d3d5e0221e976c62f4b3d/merged/etc/group: no such file or directory"
	Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.330187354Z" level=info msg="Created container 84f64260cc001cef549dda00629629686a929994b7740c80ec4068efcea903c0: kube-system/kindnet-xxgzn/kindnet-cni" id=e714c164-5750-4cc0-b0d6-d031b6b2ff80 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.330588725Z" level=info msg="Created container f2a29b737960d8ad19e084cd5f3db8078bd8b5a538bb96b63d3a142e7fcb8129: kube-system/kube-proxy-q25ss/kube-proxy" id=41c13cd0-e40d-41cf-8cfe-11701666d0de name=/runtime.v1.RuntimeService/CreateContainer
	Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.330821522Z" level=info msg="Starting container: 84f64260cc001cef549dda00629629686a929994b7740c80ec4068efcea903c0" id=35630fa2-91ce-46f3-8464-e7067760ce6e name=/runtime.v1.RuntimeService/StartContainer
	Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.354810352Z" level=info msg="Starting container: f2a29b737960d8ad19e084cd5f3db8078bd8b5a538bb96b63d3a142e7fcb8129" id=71c1e0e7-0b94-47f5-a7d3-5278373fcb0a name=/runtime.v1.RuntimeService/StartContainer
	Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.358029133Z" level=info msg="Created container f750628277d1187aae33f70880684a8297a9fb5d80845e441910a6aed0243f59: kube-system/coredns-5d78c9869d-zwnjn/coredns" id=e0049cd1-4019-42e0-a198-62f778e522b7 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.358644072Z" level=info msg="Starting container: f750628277d1187aae33f70880684a8297a9fb5d80845e441910a6aed0243f59" id=843936d4-0fac-45eb-8a64-f46231c63427 name=/runtime.v1.RuntimeService/StartContainer
	Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.364244598Z" level=info msg="Started container" PID=4190 containerID=84f64260cc001cef549dda00629629686a929994b7740c80ec4068efcea903c0 description=kube-system/kindnet-xxgzn/kindnet-cni id=35630fa2-91ce-46f3-8464-e7067760ce6e name=/runtime.v1.RuntimeService/StartContainer sandboxID=b3345840237267ff07cd1adf95864b9bf4139b9ed6a1d79057f1ade8554548d9
	Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.365716916Z" level=info msg="Started container" PID=4200 containerID=f2a29b737960d8ad19e084cd5f3db8078bd8b5a538bb96b63d3a142e7fcb8129 description=kube-system/kube-proxy-q25ss/kube-proxy id=71c1e0e7-0b94-47f5-a7d3-5278373fcb0a name=/runtime.v1.RuntimeService/StartContainer sandboxID=10a4f63fe91299bda7c87ceab088af23f72fc63df7d535bae813bae934ece015
	Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.370179696Z" level=info msg="Started container" PID=4197 containerID=f750628277d1187aae33f70880684a8297a9fb5d80845e441910a6aed0243f59 description=kube-system/coredns-5d78c9869d-zwnjn/coredns id=843936d4-0fac-45eb-8a64-f46231c63427 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b84b19cee96f1e695c1e86943773c0e8a9b43ae52c30af05ba93ec149416d8f1
	Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.759568768Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.763420485Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.763453770Z" level=info msg="Updated default CNI network name to kindnet"
	Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.763470492Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.767157196Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.767189259Z" level=info msg="Updated default CNI network name to kindnet"
	Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.767210576Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.770967757Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.770999537Z" level=info msg="Updated default CNI network name to kindnet"
	Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.856667728Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.860690516Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.860725779Z" level=info msg="Updated default CNI network name to kindnet"
	Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.860751958Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.864693013Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.864731637Z" level=info msg="Updated default CNI network name to kindnet"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	f750628277d11       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   18 seconds ago       Running             coredns                   2                   b84b19cee96f1       coredns-5d78c9869d-zwnjn
	f2a29b737960d       6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4   18 seconds ago       Running             kube-proxy                2                   10a4f63fe9129       kube-proxy-q25ss
	84f64260cc001       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da   18 seconds ago       Running             kindnet-cni               2                   b334584023726       kindnet-xxgzn
	153e7989592fa       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681   22 seconds ago       Running             etcd                      3                   354b8567d20f9       etcd-pause-734678
	5f5b0a3042ab0       e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c   22 seconds ago       Running             kube-apiserver            2                   28e451f77eb1d       kube-apiserver-pause-734678
	68a9586fd8284       f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5   22 seconds ago       Running             kube-controller-manager   3                   5d850272d862c       kube-controller-manager-pause-734678
	049cef752e63c       98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16   37 seconds ago       Running             kube-scheduler            2                   96ff7fff17c06       kube-scheduler-pause-734678
	0f77a9a2da8b6       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681   44 seconds ago       Exited              etcd                      2                   354b8567d20f9       etcd-pause-734678
	3b23f5302c00a       6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4   47 seconds ago       Exited              kube-proxy                1                   10a4f63fe9129       kube-proxy-q25ss
	4cdeeccf1e020       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da   47 seconds ago       Exited              kindnet-cni               1                   b334584023726       kindnet-xxgzn
	bc3074123d41a       f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5   50 seconds ago       Exited              kube-controller-manager   2                   5d850272d862c       kube-controller-manager-pause-734678
	d222edca27c82       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   51 seconds ago       Exited              coredns                   1                   b84b19cee96f1       coredns-5d78c9869d-zwnjn
	f112396efdb77       e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c   55 seconds ago       Exited              kube-apiserver            1                   28e451f77eb1d       kube-apiserver-pause-734678
	d5c75acd23204       98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16   About a minute ago   Exited              kube-scheduler            1                   96ff7fff17c06       kube-scheduler-pause-734678
	
	* 
	* ==> coredns [d222edca27c82b853c2e77a2ee623d656f0e5640f2d003d2b5a25cc08495ae21] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:42683 - 10087 "HINFO IN 34247377608641667.3491879901601238094. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.008825908s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": net/http: TLS handshake timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [f750628277d1187aae33f70880684a8297a9fb5d80845e441910a6aed0243f59] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:33514 - 27895 "HINFO IN 6356958996276243231.7236170660077127442. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009720058s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-734678
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-734678
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e286a113bb5db20a65222adef757d15268cdbb1a
	                    minikube.k8s.io/name=pause-734678
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_09T19_11_42_0700
	                    minikube.k8s.io/version=v1.31.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Aug 2023 19:11:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-734678
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Aug 2023 19:13:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Aug 2023 19:13:18 +0000   Wed, 09 Aug 2023 19:11:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Aug 2023 19:13:18 +0000   Wed, 09 Aug 2023 19:11:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Aug 2023 19:13:18 +0000   Wed, 09 Aug 2023 19:11:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Aug 2023 19:13:18 +0000   Wed, 09 Aug 2023 19:12:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-734678
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	System Info:
	  Machine ID:                 b1ebfdb1444641878bca9c12f34a970b
	  System UUID:                bdc2ea85-c2e8-4868-b133-226ca5414fa8
	  Boot ID:                    ea1f61fe-b434-46c1-afe7-153d4b2d65ef
	  Kernel Version:             5.15.0-1038-gcp
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.27.4
	  Kube-Proxy Version:         v1.27.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-zwnjn                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     104s
	  kube-system                 etcd-pause-734678                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         117s
	  kube-system                 kindnet-xxgzn                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-pause-734678             250m (3%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-pause-734678    200m (2%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-q25ss                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-pause-734678             100m (1%)     0 (0%)      0 (0%)           0 (0%)         117s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 103s                 kube-proxy       
	  Normal  Starting                 18s                  kube-proxy       
	  Normal  Starting                 40s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m3s (x8 over 2m3s)  kubelet          Node pause-734678 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s (x8 over 2m3s)  kubelet          Node pause-734678 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m3s (x8 over 2m3s)  kubelet          Node pause-734678 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     117s                 kubelet          Node pause-734678 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    117s                 kubelet          Node pause-734678 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  117s                 kubelet          Node pause-734678 status is now: NodeHasSufficientMemory
	  Normal  Starting                 117s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           105s                 node-controller  Node pause-734678 event: Registered Node pause-734678 in Controller
	  Normal  NodeReady                73s                  kubelet          Node pause-734678 status is now: NodeReady
	  Normal  Starting                 24s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 23s)    kubelet          Node pause-734678 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 23s)    kubelet          Node pause-734678 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x8 over 23s)    kubelet          Node pause-734678 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7s                   node-controller  Node pause-734678 event: Registered Node pause-734678 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000007] ll header: 00000000: 4e 93 4f 7d 89 d1 46 70 9f b9 c1 47 08 00
	[ +16.130450] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 93 4f 7d 89 d1 46 70 9f b9 c1 47 08 00
	[Aug 9 18:51] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 93 4f 7d 89 d1 46 70 9f b9 c1 47 08 00
	[Aug 9 19:01] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-f5f975ef181d
	[  +0.000006] ll header: 00000000: 02 42 d8 4b df e2 02 42 c0 a8 3a 02 08 00
	[  +1.024552] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-f5f975ef181d
	[  +0.000023] ll header: 00000000: 02 42 d8 4b df e2 02 42 c0 a8 3a 02 08 00
	[  +2.015754] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-f5f975ef181d
	[  +0.000007] ll header: 00000000: 02 42 d8 4b df e2 02 42 c0 a8 3a 02 08 00
	[  +4.223604] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-f5f975ef181d
	[  +0.000023] ll header: 00000000: 02 42 d8 4b df e2 02 42 c0 a8 3a 02 08 00
	[  +8.191193] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-f5f975ef181d
	[  +0.000029] ll header: 00000000: 02 42 d8 4b df e2 02 42 c0 a8 3a 02 08 00
	[Aug 9 19:03] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-f5f975ef181d
	[  +0.000005] ll header: 00000000: 02 42 d8 4b df e2 02 42 c0 a8 3a 02 08 00
	[  +1.014398] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-f5f975ef181d
	[  +0.000023] ll header: 00000000: 02 42 d8 4b df e2 02 42 c0 a8 3a 02 08 00
	[  +2.015765] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-f5f975ef181d
	[  +0.000022] ll header: 00000000: 02 42 d8 4b df e2 02 42 c0 a8 3a 02 08 00
	[  +4.031615] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-f5f975ef181d
	[  +0.000026] ll header: 00000000: 02 42 d8 4b df e2 02 42 c0 a8 3a 02 08 00
	[  +8.191161] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-f5f975ef181d
	[  +0.000005] ll header: 00000000: 02 42 d8 4b df e2 02 42 c0 a8 3a 02 08 00
	
	* 
	* ==> etcd [0f77a9a2da8b6d363a14c0b5d40080f9af67c6640d2822a43e82eb09bad297d9] <==
	* {"level":"info","ts":"2023-08-09T19:12:53.624Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-08-09T19:12:53.624Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2023-08-09T19:12:53.624Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2023-08-09T19:12:53.624Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-08-09T19:12:53.624Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-08-09T19:12:54.615Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2023-08-09T19:12:54.615Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2023-08-09T19:12:54.616Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2023-08-09T19:12:54.616Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2023-08-09T19:12:54.616Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2023-08-09T19:12:54.616Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2023-08-09T19:12:54.616Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2023-08-09T19:12:54.618Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:pause-734678 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-09T19:12:54.618Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-09T19:12:54.618Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-09T19:12:54.618Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-09T19:12:54.618Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-08-09T19:12:54.620Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2023-08-09T19:12:54.620Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-09T19:12:58.106Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-08-09T19:12:58.106Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"pause-734678","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"info","ts":"2023-08-09T19:12:58.165Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2023-08-09T19:12:58.168Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2023-08-09T19:12:58.169Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2023-08-09T19:12:58.169Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"pause-734678","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	* 
	* ==> etcd [153e7989592fa854ce1ba5baf4e16eb9ce317ad9f57784a948fd4f0351bf484d] <==
	* {"level":"info","ts":"2023-08-09T19:13:15.887Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-08-09T19:13:15.887Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-08-09T19:13:15.882Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2023-08-09T19:13:15.887Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2023-08-09T19:13:15.888Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-09T19:13:15.888Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-09T19:13:15.898Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-08-09T19:13:15.898Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-08-09T19:13:15.898Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-08-09T19:13:15.898Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2023-08-09T19:13:15.898Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2023-08-09T19:13:17.359Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 3"}
	{"level":"info","ts":"2023-08-09T19:13:17.359Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 3"}
	{"level":"info","ts":"2023-08-09T19:13:17.359Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2023-08-09T19:13:17.359Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 4"}
	{"level":"info","ts":"2023-08-09T19:13:17.359Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 4"}
	{"level":"info","ts":"2023-08-09T19:13:17.359Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 4"}
	{"level":"info","ts":"2023-08-09T19:13:17.359Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 4"}
	{"level":"info","ts":"2023-08-09T19:13:17.360Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:pause-734678 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-09T19:13:17.360Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-09T19:13:17.360Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-09T19:13:17.360Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-09T19:13:17.360Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-08-09T19:13:17.361Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2023-08-09T19:13:17.361Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  19:13:38 up  2:56,  0 users,  load average: 3.67, 3.26, 2.28
	Linux pause-734678 5.15.0-1038-gcp #46~20.04.1-Ubuntu SMP Fri Jul 14 09:48:19 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [4cdeeccf1e020b0f8436e2bf45bc3520fc0e59a7253a1255b7b6a12cae630441] <==
	* I0809 19:12:50.660436       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0809 19:12:50.660485       1 main.go:107] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0809 19:12:50.660669       1 main.go:116] setting mtu 1500 for CNI 
	I0809 19:12:50.660688       1 main.go:146] kindnetd IP family: "ipv4"
	I0809 19:12:50.660705       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0809 19:12:57.955192       1 main.go:191] Failed to get nodes, retrying after error: nodes is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "nodes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "kindnet" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found]
	I0809 19:12:57.956400       1 main.go:191] Failed to get nodes, retrying after error: nodes is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "nodes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "kindnet" not found]
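Both errors above say the kindnet service account's ClusterRoles had not yet been installed when this container started. Whether the permission has since arrived can be tested with kubectl's access-review support; a sketch shelling out to "kubectl auth can-i" (a standard subcommand) for the exact verb/resource the errors name:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Ask the apiserver whether kindnet's service account may list
		// nodes. Note: "kubectl auth can-i" exits non-zero when the
		// answer is no, so err is set in the denied case too.
		out, err := exec.Command("kubectl", "auth", "can-i", "list", "nodes",
			"--as", "system:serviceaccount:kube-system:kindnet").CombinedOutput()
		fmt.Print(string(out)) // "yes" once the kindnet ClusterRole exists
		if err != nil {
			fmt.Println("denied or error:", err)
		}
	}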
	
	* 
	* ==> kindnet [84f64260cc001cef549dda00629629686a929994b7740c80ec4068efcea903c0] <==
	* I0809 19:13:19.458574       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0809 19:13:19.458637       1 main.go:107] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0809 19:13:19.458870       1 main.go:116] setting mtu 1500 for CNI 
	I0809 19:13:19.458891       1 main.go:146] kindnetd IP family: "ipv4"
	I0809 19:13:19.458916       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0809 19:13:19.759308       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0809 19:13:19.759334       1 main.go:227] handling current node
	I0809 19:13:29.867750       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0809 19:13:29.867781       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [5f5b0a3042ab0093ffe550869327a1890b14c6143f51f0e15fae74001562efc4] <==
	* I0809 19:13:18.500464       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0809 19:13:18.501479       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0809 19:13:18.501458       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0809 19:13:18.501469       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0809 19:13:18.503499       1 controller.go:83] Starting OpenAPI AggregationController
	I0809 19:13:18.676279       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0809 19:13:18.677847       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0809 19:13:18.677986       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0809 19:13:18.754653       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0809 19:13:18.754793       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0809 19:13:18.754815       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0809 19:13:18.754912       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0809 19:13:18.754950       1 shared_informer.go:318] Caches are synced for configmaps
	I0809 19:13:18.755113       1 aggregator.go:152] initial CRD sync complete...
	I0809 19:13:18.755133       1 autoregister_controller.go:141] Starting autoregister controller
	I0809 19:13:18.755140       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0809 19:13:18.755148       1 cache.go:39] Caches are synced for autoregister controller
	I0809 19:13:18.755305       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0809 19:13:19.253730       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0809 19:13:19.504043       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0809 19:13:20.878673       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0809 19:13:20.971984       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0809 19:13:20.981283       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0809 19:13:21.029826       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0809 19:13:21.037797       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-apiserver [f112396efdb77643d61e214e783dfe5ff3446c800c5ca9b473e5bc872478e5f6] <==
	* }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0809 19:13:13.655106       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0809 19:13:13.683337       1 logging.go:59] [core] [Channel #122 SubChannel #123] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0809 19:13:13.687837       1 logging.go:59] [core] [Channel #68 SubChannel #69] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
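Every entry above is the apiserver failing to dial etcd on 127.0.0.1:2379 during the restart window. The etcd logs earlier show it serving metrics on http://127.0.0.1:2381, which also exposes /health; a quick Go probe (a sketch, run from the node itself since the listener is loopback-only) distinguishes "etcd down" from "etcd restarting":

	package main

	import (
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// Hit etcd's health endpoint on the metrics URL advertised in
		// the etcd logs above (no TLS on the metrics listener).
		resp, err := http.Get("http://127.0.0.1:2381/health")
		if err != nil {
			fmt.Println("etcd health probe failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body)) // e.g. 200 {"health":"true"}
	}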
	
	* 
	* ==> kube-controller-manager [68a9586fd828423d665f37ecb232fa460fbd27ae96a48b6e0d39492ad016352e] <==
	* I0809 19:13:31.705065       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-734678"
	I0809 19:13:31.705193       1 node_lifecycle_controller.go:1069] "Controller detected that zone is now in new state" zone="" newState=Normal
	I0809 19:13:31.707880       1 shared_informer.go:318] Caches are synced for node
	I0809 19:13:31.707967       1 range_allocator.go:174] "Sending events to api server"
	I0809 19:13:31.707920       1 shared_informer.go:318] Caches are synced for attach detach
	I0809 19:13:31.707994       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0809 19:13:31.708000       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0809 19:13:31.708007       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0809 19:13:31.709437       1 shared_informer.go:318] Caches are synced for PVC protection
	I0809 19:13:31.714511       1 shared_informer.go:318] Caches are synced for endpoint
	I0809 19:13:31.718409       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0809 19:13:31.745918       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0809 19:13:31.746106       1 shared_informer.go:318] Caches are synced for GC
	I0809 19:13:31.758283       1 shared_informer.go:318] Caches are synced for job
	I0809 19:13:31.810868       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0809 19:13:31.845420       1 shared_informer.go:318] Caches are synced for crt configmap
	I0809 19:13:31.846651       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0809 19:13:31.877519       1 shared_informer.go:318] Caches are synced for disruption
	I0809 19:13:31.889075       1 shared_informer.go:318] Caches are synced for deployment
	I0809 19:13:31.896167       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0809 19:13:31.936193       1 shared_informer.go:318] Caches are synced for resource quota
	I0809 19:13:31.951726       1 shared_informer.go:318] Caches are synced for resource quota
	I0809 19:13:32.281412       1 shared_informer.go:318] Caches are synced for garbage collector
	I0809 19:13:32.294951       1 shared_informer.go:318] Caches are synced for garbage collector
	I0809 19:13:32.294984       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-controller-manager [bc3074123d41a976429c2777747bda837d51d3dcc586eac5bb23fb9be27dffe2] <==
	* I0809 19:12:48.063047       1 serving.go:348] Generated self-signed cert in-memory
	I0809 19:12:48.272946       1 controllermanager.go:187] "Starting" version="v1.27.4"
	I0809 19:12:48.272971       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0809 19:12:48.273930       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0809 19:12:48.273954       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0809 19:12:48.274597       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0809 19:12:48.274664       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	* 
	* ==> kube-proxy [3b23f5302c00ae4a6dc8fdd99a47353d345d3eaf4a110ea7793654f4a876cff8] <==
	* I0809 19:12:57.961633       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I0809 19:12:57.961917       1 server_others.go:110] "Detected node IP" address="192.168.85.2"
	I0809 19:12:57.961994       1 server_others.go:554] "Using iptables proxy"
	I0809 19:12:57.987270       1 server_others.go:192] "Using iptables Proxier"
	I0809 19:12:57.987316       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0809 19:12:57.987328       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0809 19:12:57.987347       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0809 19:12:57.987385       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0809 19:12:57.988336       1 server.go:658] "Version info" version="v1.27.4"
	I0809 19:12:57.988462       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0809 19:12:57.989609       1 config.go:188] "Starting service config controller"
	I0809 19:12:57.989633       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0809 19:12:57.989659       1 config.go:97] "Starting endpoint slice config controller"
	I0809 19:12:57.989662       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0809 19:12:57.990058       1 config.go:315] "Starting node config controller"
	I0809 19:12:57.990070       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0809 19:12:58.089991       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0809 19:12:58.090013       1 shared_informer.go:318] Caches are synced for service config
	I0809 19:12:58.090126       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-proxy [f2a29b737960d8ad19e084cd5f3db8078bd8b5a538bb96b63d3a142e7fcb8129] <==
	* I0809 19:13:19.407431       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I0809 19:13:19.407515       1 server_others.go:110] "Detected node IP" address="192.168.85.2"
	I0809 19:13:19.407539       1 server_others.go:554] "Using iptables proxy"
	I0809 19:13:19.463291       1 server_others.go:192] "Using iptables Proxier"
	I0809 19:13:19.463319       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0809 19:13:19.463326       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0809 19:13:19.463339       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0809 19:13:19.463369       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0809 19:13:19.463999       1 server.go:658] "Version info" version="v1.27.4"
	I0809 19:13:19.464020       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0809 19:13:19.465559       1 config.go:97] "Starting endpoint slice config controller"
	I0809 19:13:19.465594       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0809 19:13:19.465629       1 config.go:188] "Starting service config controller"
	I0809 19:13:19.465638       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0809 19:13:19.465718       1 config.go:315] "Starting node config controller"
	I0809 19:13:19.465738       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0809 19:13:19.565728       1 shared_informer.go:318] Caches are synced for service config
	I0809 19:13:19.565741       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0809 19:13:19.565790       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [049cef752e63c67d62b488bcea64bea577b7ad37b6a3f8003e40e6f410707a21] <==
	* E0809 19:13:18.680241       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0809 19:13:18.680302       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0809 19:13:18.680600       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0809 19:13:18.680624       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0809 19:13:18.680736       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0809 19:13:18.680741       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0809 19:13:18.680750       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0809 19:13:18.680758       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0809 19:13:18.680834       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0809 19:13:18.681053       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0809 19:13:18.680909       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0809 19:13:18.681072       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0809 19:13:18.680978       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0809 19:13:18.681097       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0809 19:13:18.681130       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0809 19:13:18.681144       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0809 19:13:18.681268       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0809 19:13:18.681287       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0809 19:13:18.681303       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0809 19:13:18.681315       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0809 19:13:18.681319       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0809 19:13:18.681330       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0809 19:13:18.754504       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0809 19:13:18.754545       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0809 19:13:20.972108       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
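
	The burst of "forbidden" reflector warnings above is transient: the scheduler's informers retried while the restarted apiserver was still re-establishing RBAC, and the closing "Caches are synced" line shows the condition cleared on its own. If errors like these persisted, the scheduler's effective permissions could be spot-checked with impersonation (a sketch only; the user and resources are taken from the log lines above):
	
	  kubectl --context pause-734678 auth can-i list persistentvolumeclaims --as=system:kube-scheduler
	  kubectl --context pause-734678 auth can-i watch storageclasses.storage.k8s.io --as=system:kube-scheduler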
	
	* 
	* ==> kube-scheduler [d5c75acd2320400227d1888ad9ca5e5ec56a1c2b9c137db426552d9917fd5fc4] <==
	* 
	* 
	* ==> kubelet <==
	* Aug 09 19:13:16 pause-734678 kubelet[3932]: I0809 19:13:16.464830    3932 kubelet_node_status.go:70] "Attempting to register node" node="pause-734678"
	Aug 09 19:13:18 pause-734678 kubelet[3932]: I0809 19:13:18.761263    3932 kubelet_node_status.go:108] "Node was previously registered" node="pause-734678"
	Aug 09 19:13:18 pause-734678 kubelet[3932]: I0809 19:13:18.761360    3932 kubelet_node_status.go:73] "Successfully registered node" node="pause-734678"
	Aug 09 19:13:18 pause-734678 kubelet[3932]: I0809 19:13:18.762647    3932 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 09 19:13:18 pause-734678 kubelet[3932]: I0809 19:13:18.763385    3932 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 09 19:13:18 pause-734678 kubelet[3932]: I0809 19:13:18.921160    3932 apiserver.go:52] "Watching apiserver"
	Aug 09 19:13:18 pause-734678 kubelet[3932]: I0809 19:13:18.923964    3932 topology_manager.go:212] "Topology Admit Handler"
	Aug 09 19:13:18 pause-734678 kubelet[3932]: I0809 19:13:18.924096    3932 topology_manager.go:212] "Topology Admit Handler"
	Aug 09 19:13:18 pause-734678 kubelet[3932]: I0809 19:13:18.924152    3932 topology_manager.go:212] "Topology Admit Handler"
	Aug 09 19:13:18 pause-734678 kubelet[3932]: I0809 19:13:18.927274    3932 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
	Aug 09 19:13:18 pause-734678 kubelet[3932]: I0809 19:13:18.976258    3932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/23d08f14-790a-46be-87b1-032c144a76cb-lib-modules\") pod \"kube-proxy-q25ss\" (UID: \"23d08f14-790a-46be-87b1-032c144a76cb\") " pod="kube-system/kube-proxy-q25ss"
	Aug 09 19:13:18 pause-734678 kubelet[3932]: I0809 19:13:18.976380    3932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdpzl\" (UniqueName: \"kubernetes.io/projected/7c939e8b-f847-44a3-984e-6276b66d3afc-kube-api-access-bdpzl\") pod \"coredns-5d78c9869d-zwnjn\" (UID: \"7c939e8b-f847-44a3-984e-6276b66d3afc\") " pod="kube-system/coredns-5d78c9869d-zwnjn"
	Aug 09 19:13:18 pause-734678 kubelet[3932]: I0809 19:13:18.976462    3932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/23d08f14-790a-46be-87b1-032c144a76cb-xtables-lock\") pod \"kube-proxy-q25ss\" (UID: \"23d08f14-790a-46be-87b1-032c144a76cb\") " pod="kube-system/kube-proxy-q25ss"
	Aug 09 19:13:18 pause-734678 kubelet[3932]: I0809 19:13:18.976492    3932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9265ba13-07f0-4c44-a920-74175ec0e07a-lib-modules\") pod \"kindnet-xxgzn\" (UID: \"9265ba13-07f0-4c44-a920-74175ec0e07a\") " pod="kube-system/kindnet-xxgzn"
	Aug 09 19:13:18 pause-734678 kubelet[3932]: I0809 19:13:18.976590    3932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/23d08f14-790a-46be-87b1-032c144a76cb-kube-proxy\") pod \"kube-proxy-q25ss\" (UID: \"23d08f14-790a-46be-87b1-032c144a76cb\") " pod="kube-system/kube-proxy-q25ss"
	Aug 09 19:13:18 pause-734678 kubelet[3932]: I0809 19:13:18.976638    3932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9265ba13-07f0-4c44-a920-74175ec0e07a-xtables-lock\") pod \"kindnet-xxgzn\" (UID: \"9265ba13-07f0-4c44-a920-74175ec0e07a\") " pod="kube-system/kindnet-xxgzn"
	Aug 09 19:13:18 pause-734678 kubelet[3932]: I0809 19:13:18.977210    3932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s79th\" (UniqueName: \"kubernetes.io/projected/9265ba13-07f0-4c44-a920-74175ec0e07a-kube-api-access-s79th\") pod \"kindnet-xxgzn\" (UID: \"9265ba13-07f0-4c44-a920-74175ec0e07a\") " pod="kube-system/kindnet-xxgzn"
	Aug 09 19:13:18 pause-734678 kubelet[3932]: I0809 19:13:18.977311    3932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-854kj\" (UniqueName: \"kubernetes.io/projected/23d08f14-790a-46be-87b1-032c144a76cb-kube-api-access-854kj\") pod \"kube-proxy-q25ss\" (UID: \"23d08f14-790a-46be-87b1-032c144a76cb\") " pod="kube-system/kube-proxy-q25ss"
	Aug 09 19:13:18 pause-734678 kubelet[3932]: I0809 19:13:18.977360    3932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9265ba13-07f0-4c44-a920-74175ec0e07a-cni-cfg\") pod \"kindnet-xxgzn\" (UID: \"9265ba13-07f0-4c44-a920-74175ec0e07a\") " pod="kube-system/kindnet-xxgzn"
	Aug 09 19:13:18 pause-734678 kubelet[3932]: I0809 19:13:18.977400    3932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c939e8b-f847-44a3-984e-6276b66d3afc-config-volume\") pod \"coredns-5d78c9869d-zwnjn\" (UID: \"7c939e8b-f847-44a3-984e-6276b66d3afc\") " pod="kube-system/coredns-5d78c9869d-zwnjn"
	Aug 09 19:13:18 pause-734678 kubelet[3932]: I0809 19:13:18.977426    3932 reconciler.go:41] "Reconciler: start to sync state"
	Aug 09 19:13:19 pause-734678 kubelet[3932]: I0809 19:13:19.225235    3932 scope.go:115] "RemoveContainer" containerID="d222edca27c82b853c2e77a2ee623d656f0e5640f2d003d2b5a25cc08495ae21"
	Aug 09 19:13:19 pause-734678 kubelet[3932]: I0809 19:13:19.225335    3932 scope.go:115] "RemoveContainer" containerID="3b23f5302c00ae4a6dc8fdd99a47353d345d3eaf4a110ea7793654f4a876cff8"
	Aug 09 19:13:19 pause-734678 kubelet[3932]: I0809 19:13:19.225386    3932 scope.go:115] "RemoveContainer" containerID="4cdeeccf1e020b0f8436e2bf45bc3520fc0e59a7253a1255b7b6a12cae630441"
	Aug 09 19:13:21 pause-734678 kubelet[3932]: I0809 19:13:21.822192    3932 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
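The three "RemoveContainer" entries at the tail of the kubelet log are the kubelet garbage-collecting the pre-restart container instances by ID once their replacements were running. One way to confirm no exited containers linger on the node would be something like this (a sketch; crictl is the CRI CLI already exercised elsewhere in this report):

	out/minikube-linux-amd64 -p pause-734678 ssh "sudo crictl ps -a --state exited"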
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-734678 -n pause-734678
helpers_test.go:261: (dbg) Run:  kubectl --context pause-734678 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
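The field-selector query above is the post-mortem's stuck-pod check: it lists every pod across namespaces whose phase is not Running, so an empty result is a pass. The same pattern, widened to show node placement (a sketch; -o wide is stock kubectl output):

	kubectl --context pause-734678 get po -A --field-selector=status.phase!=Running -o wide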
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-734678
helpers_test.go:235: (dbg) docker inspect pause-734678:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "455c5c1d8c5d4449aa59e19e9578b80ef8192cd8c71866255d281d670ae7a0af",
	        "Created": "2023-08-09T19:11:24.549859266Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1001180,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-08-09T19:11:24.846197287Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:51eee4927f7e218e70017d38db072c77f0b6036bbfe389eac8043694e7529d58",
	        "ResolvConfPath": "/var/lib/docker/containers/455c5c1d8c5d4449aa59e19e9578b80ef8192cd8c71866255d281d670ae7a0af/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/455c5c1d8c5d4449aa59e19e9578b80ef8192cd8c71866255d281d670ae7a0af/hostname",
	        "HostsPath": "/var/lib/docker/containers/455c5c1d8c5d4449aa59e19e9578b80ef8192cd8c71866255d281d670ae7a0af/hosts",
	        "LogPath": "/var/lib/docker/containers/455c5c1d8c5d4449aa59e19e9578b80ef8192cd8c71866255d281d670ae7a0af/455c5c1d8c5d4449aa59e19e9578b80ef8192cd8c71866255d281d670ae7a0af-json.log",
	        "Name": "/pause-734678",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-734678:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-734678",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4bbd60dde32775251b9b53c046ecaebb1fa9752ef7e51a68d60310d84e3d59fb-init/diff:/var/lib/docker/overlay2/dffcbda35d4e6780372e77e03c9f976a612c164e3ac348da817dd7b6996e96fb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4bbd60dde32775251b9b53c046ecaebb1fa9752ef7e51a68d60310d84e3d59fb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4bbd60dde32775251b9b53c046ecaebb1fa9752ef7e51a68d60310d84e3d59fb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4bbd60dde32775251b9b53c046ecaebb1fa9752ef7e51a68d60310d84e3d59fb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-734678",
	                "Source": "/var/lib/docker/volumes/pause-734678/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-734678",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-734678",
	                "name.minikube.sigs.k8s.io": "pause-734678",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f676634a700c3f44875cefcdbe71ad06cbcb8db26e7e22f71623fbbec48bb608",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33616"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33615"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33612"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33614"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33613"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f676634a700c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-734678": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "455c5c1d8c5d",
	                        "pause-734678"
	                    ],
	                    "NetworkID": "8e065b7d722331af1dd6c2f0d877c8db09a553617a646a1b3a8e8b1b15ce4d3a",
	                    "EndpointID": "d0ab2d877dcb4ed4c0260ff81533f81e4b3216644fcf039454aa2ee86965348b",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
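Single fields can be pulled out of an inspect dump like this with Go templates instead of scanning the whole document; for example (a sketch using the container from this run, values as captured above):

	docker inspect -f '{{.State.Status}}' pause-734678
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' pause-734678

Against the capture above these would print "running" and "33613", the host port minikube uses to reach the API server.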
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-734678 -n pause-734678
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-734678 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-734678 logs -n 25: (1.470737402s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|-------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |   Profile   |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-393336 sudo crictl                           | auto-393336 | jenkins | v1.31.1 | 09 Aug 23 19:13 UTC | 09 Aug 23 19:13 UTC |
	|         | pods                                                 |             |         |         |                     |                     |
	| ssh     | -p auto-393336 sudo crictl ps                        | auto-393336 | jenkins | v1.31.1 | 09 Aug 23 19:13 UTC | 09 Aug 23 19:13 UTC |
	|         | --all                                                |             |         |         |                     |                     |
	| ssh     | -p auto-393336 sudo find                             | auto-393336 | jenkins | v1.31.1 | 09 Aug 23 19:13 UTC | 09 Aug 23 19:13 UTC |
	|         | /etc/cni -type f -exec sh -c                         |             |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |             |         |         |                     |                     |
	| ssh     | -p auto-393336 sudo ip a s                           | auto-393336 | jenkins | v1.31.1 | 09 Aug 23 19:13 UTC | 09 Aug 23 19:13 UTC |
	| ssh     | -p auto-393336 sudo ip r s                           | auto-393336 | jenkins | v1.31.1 | 09 Aug 23 19:13 UTC | 09 Aug 23 19:13 UTC |
	| ssh     | -p auto-393336 sudo                                  | auto-393336 | jenkins | v1.31.1 | 09 Aug 23 19:13 UTC | 09 Aug 23 19:13 UTC |
	|         | iptables-save                                        |             |         |         |                     |                     |
	| ssh     | -p auto-393336 sudo iptables                         | auto-393336 | jenkins | v1.31.1 | 09 Aug 23 19:13 UTC | 09 Aug 23 19:13 UTC |
	|         | -t nat -L -n -v                                      |             |         |         |                     |                     |
	| ssh     | -p auto-393336 sudo systemctl                        | auto-393336 | jenkins | v1.31.1 | 09 Aug 23 19:13 UTC | 09 Aug 23 19:13 UTC |
	|         | status kubelet --all --full                          |             |         |         |                     |                     |
	|         | --no-pager                                           |             |         |         |                     |                     |
	| ssh     | -p auto-393336 sudo systemctl                        | auto-393336 | jenkins | v1.31.1 | 09 Aug 23 19:13 UTC | 09 Aug 23 19:13 UTC |
	|         | cat kubelet --no-pager                               |             |         |         |                     |                     |
	| ssh     | -p auto-393336 sudo journalctl                       | auto-393336 | jenkins | v1.31.1 | 09 Aug 23 19:13 UTC | 09 Aug 23 19:13 UTC |
	|         | -xeu kubelet --all --full                            |             |         |         |                     |                     |
	|         | --no-pager                                           |             |         |         |                     |                     |
	| ssh     | -p auto-393336 sudo cat                              | auto-393336 | jenkins | v1.31.1 | 09 Aug 23 19:13 UTC | 09 Aug 23 19:13 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |             |         |         |                     |                     |
	| ssh     | -p auto-393336 sudo cat                              | auto-393336 | jenkins | v1.31.1 | 09 Aug 23 19:13 UTC | 09 Aug 23 19:13 UTC |
	|         | /var/lib/kubelet/config.yaml                         |             |         |         |                     |                     |
	| ssh     | -p auto-393336 sudo systemctl                        | auto-393336 | jenkins | v1.31.1 | 09 Aug 23 19:13 UTC |                     |
	|         | status docker --all --full                           |             |         |         |                     |                     |
	|         | --no-pager                                           |             |         |         |                     |                     |
	| ssh     | -p auto-393336 sudo systemctl                        | auto-393336 | jenkins | v1.31.1 | 09 Aug 23 19:13 UTC | 09 Aug 23 19:13 UTC |
	|         | cat docker --no-pager                                |             |         |         |                     |                     |
	| ssh     | -p auto-393336 sudo cat                              | auto-393336 | jenkins | v1.31.1 | 09 Aug 23 19:13 UTC |                     |
	|         | /etc/docker/daemon.json                              |             |         |         |                     |                     |
	| ssh     | -p auto-393336 sudo docker                           | auto-393336 | jenkins | v1.31.1 | 09 Aug 23 19:13 UTC |                     |
	|         | system info                                          |             |         |         |                     |                     |
	| ssh     | -p auto-393336 sudo systemctl                        | auto-393336 | jenkins | v1.31.1 | 09 Aug 23 19:13 UTC |                     |
	|         | status cri-docker --all --full                       |             |         |         |                     |                     |
	|         | --no-pager                                           |             |         |         |                     |                     |
	| ssh     | -p auto-393336 sudo systemctl                        | auto-393336 | jenkins | v1.31.1 | 09 Aug 23 19:13 UTC | 09 Aug 23 19:13 UTC |
	|         | cat cri-docker --no-pager                            |             |         |         |                     |                     |
	| ssh     | -p auto-393336 sudo cat                              | auto-393336 | jenkins | v1.31.1 | 09 Aug 23 19:13 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |             |         |         |                     |                     |
	| ssh     | -p auto-393336 sudo cat                              | auto-393336 | jenkins | v1.31.1 | 09 Aug 23 19:13 UTC | 09 Aug 23 19:13 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |             |         |         |                     |                     |
	| ssh     | -p auto-393336 sudo                                  | auto-393336 | jenkins | v1.31.1 | 09 Aug 23 19:13 UTC | 09 Aug 23 19:13 UTC |
	|         | cri-dockerd --version                                |             |         |         |                     |                     |
	| ssh     | -p auto-393336 sudo systemctl                        | auto-393336 | jenkins | v1.31.1 | 09 Aug 23 19:13 UTC |                     |
	|         | status containerd --all --full                       |             |         |         |                     |                     |
	|         | --no-pager                                           |             |         |         |                     |                     |
	| ssh     | -p auto-393336 sudo systemctl                        | auto-393336 | jenkins | v1.31.1 | 09 Aug 23 19:13 UTC | 09 Aug 23 19:13 UTC |
	|         | cat containerd --no-pager                            |             |         |         |                     |                     |
	| ssh     | -p auto-393336 sudo cat                              | auto-393336 | jenkins | v1.31.1 | 09 Aug 23 19:13 UTC | 09 Aug 23 19:13 UTC |
	|         | /lib/systemd/system/containerd.service               |             |         |         |                     |                     |
	| ssh     | -p auto-393336 sudo cat                              | auto-393336 | jenkins | v1.31.1 | 09 Aug 23 19:13 UTC |                     |
	|         | /etc/containerd/config.toml                          |             |         |         |                     |                     |
	|---------|------------------------------------------------------|-------------|---------|---------|---------------------|---------------------|
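	Rows in the audit table with an empty End Time did not record a successful finish, which appears consistent with probes that exit non-zero on a CRI-O profile: the docker and containerd status checks fail because neither is the active runtime. Each row reads back as a single minikube invocation, e.g. (profile name from the table):
	
	  out/minikube-linux-amd64 ssh -p auto-393336 "sudo crictl ps --all"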
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/09 19:13:29
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.20.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0809 19:13:29.413362 1020043 out.go:296] Setting OutFile to fd 1 ...
	I0809 19:13:29.413500 1020043 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 19:13:29.413509 1020043 out.go:309] Setting ErrFile to fd 2...
	I0809 19:13:29.413514 1020043 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 19:13:29.413707 1020043 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17011-816603/.minikube/bin
	I0809 19:13:29.414312 1020043 out.go:303] Setting JSON to false
	I0809 19:13:29.425334 1020043 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":10565,"bootTime":1691597845,"procs":843,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1038-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0809 19:13:29.425427 1020043 start.go:138] virtualization: kvm guest
	I0809 19:13:29.427788 1020043 out.go:177] * [kindnet-393336] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)
	I0809 19:13:29.429355 1020043 notify.go:220] Checking for updates...
	I0809 19:13:29.430699 1020043 out.go:177]   - MINIKUBE_LOCATION=17011
	I0809 19:13:29.434523 1020043 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 19:13:29.435914 1020043 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17011-816603/kubeconfig
	I0809 19:13:29.437283 1020043 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17011-816603/.minikube
	I0809 19:13:29.438542 1020043 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0809 19:13:29.440138 1020043 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0809 19:13:29.442000 1020043 config.go:182] Loaded profile config "auto-393336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0809 19:13:29.442109 1020043 config.go:182] Loaded profile config "kubernetes-upgrade-222913": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0-rc.0
	I0809 19:13:29.442226 1020043 config.go:182] Loaded profile config "pause-734678": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0809 19:13:29.442310 1020043 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 19:13:29.467570 1020043 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0809 19:13:29.467693 1020043 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0809 19:13:29.528233 1020043 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:66 SystemTime:2023-08-09 19:13:29.517937089 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1038-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0809 19:13:29.528439 1020043 docker.go:294] overlay module found
	I0809 19:13:29.531005 1020043 out.go:177] * Using the docker driver based on user configuration
	I0809 19:13:29.532334 1020043 start.go:298] selected driver: docker
	I0809 19:13:29.532350 1020043 start.go:901] validating driver "docker" against <nil>
	I0809 19:13:29.532363 1020043 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0809 19:13:29.533282 1020043 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0809 19:13:29.609805 1020043 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:66 SystemTime:2023-08-09 19:13:29.600660145 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1038-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0809 19:13:29.609982 1020043 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0809 19:13:29.610267 1020043 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0809 19:13:29.611994 1020043 out.go:177] * Using Docker driver with root privileges
	I0809 19:13:29.613215 1020043 cni.go:84] Creating CNI manager for "kindnet"
	I0809 19:13:29.613242 1020043 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0809 19:13:29.613255 1020043 start_flags.go:319] config:
	{Name:kindnet-393336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:kindnet-393336 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 19:13:29.614735 1020043 out.go:177] * Starting control plane node kindnet-393336 in cluster kindnet-393336
	I0809 19:13:29.615859 1020043 cache.go:122] Beginning downloading kic base image for docker with crio
	I0809 19:13:29.617057 1020043 out.go:177] * Pulling base image ...
	I0809 19:13:29.618166 1020043 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0809 19:13:29.618214 1020043 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17011-816603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4
	I0809 19:13:29.618222 1020043 cache.go:57] Caching tarball of preloaded images
	I0809 19:13:29.618284 1020043 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 in local docker daemon
	I0809 19:13:29.618351 1020043 preload.go:174] Found /home/jenkins/minikube-integration/17011-816603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0809 19:13:29.618367 1020043 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on crio
	I0809 19:13:29.618519 1020043 profile.go:148] Saving config to /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/kindnet-393336/config.json ...
	I0809 19:13:29.618542 1020043 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/kindnet-393336/config.json: {Name:mk1eb5b3166e5455a245a78e2a4f67ed67296e53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 19:13:29.636675 1020043 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 in local docker daemon, skipping pull
	I0809 19:13:29.636707 1020043 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 exists in daemon, skipping load
	I0809 19:13:29.636726 1020043 cache.go:195] Successfully downloaded all kic artifacts
	I0809 19:13:29.636779 1020043 start.go:365] acquiring machines lock for kindnet-393336: {Name:mkb40a2131763f1ac0cb1dbeabdd4af29bdfcfa0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0809 19:13:29.636897 1020043 start.go:369] acquired machines lock for "kindnet-393336" in 94.819µs
	I0809 19:13:29.636929 1020043 start.go:93] Provisioning new machine with config: &{Name:kindnet-393336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:kindnet-393336 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0809 19:13:29.637040 1020043 start.go:125] createHost starting for "" (driver="docker")
	I0809 19:13:29.094464 1011483 pod_ready.go:102] pod "etcd-pause-734678" in "kube-system" namespace has status "Ready":"False"
	I0809 19:13:31.095010 1011483 pod_ready.go:102] pod "etcd-pause-734678" in "kube-system" namespace has status "Ready":"False"
	I0809 19:13:33.095618 1011483 pod_ready.go:92] pod "etcd-pause-734678" in "kube-system" namespace has status "Ready":"True"
	I0809 19:13:33.095672 1011483 pod_ready.go:81] duration metric: took 11.022251336s waiting for pod "etcd-pause-734678" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:33.095696 1011483 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-734678" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:33.101219 1011483 pod_ready.go:92] pod "kube-apiserver-pause-734678" in "kube-system" namespace has status "Ready":"True"
	I0809 19:13:33.101243 1011483 pod_ready.go:81] duration metric: took 5.531107ms waiting for pod "kube-apiserver-pause-734678" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:33.101256 1011483 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-734678" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:33.107554 1011483 pod_ready.go:92] pod "kube-controller-manager-pause-734678" in "kube-system" namespace has status "Ready":"True"
	I0809 19:13:33.107578 1011483 pod_ready.go:81] duration metric: took 6.313562ms waiting for pod "kube-controller-manager-pause-734678" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:33.107591 1011483 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-q25ss" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:33.112970 1011483 pod_ready.go:92] pod "kube-proxy-q25ss" in "kube-system" namespace has status "Ready":"True"
	I0809 19:13:33.112993 1011483 pod_ready.go:81] duration metric: took 5.393412ms waiting for pod "kube-proxy-q25ss" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:33.113005 1011483 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-734678" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:33.118913 1011483 pod_ready.go:92] pod "kube-scheduler-pause-734678" in "kube-system" namespace has status "Ready":"True"
	I0809 19:13:33.118936 1011483 pod_ready.go:81] duration metric: took 5.923321ms waiting for pod "kube-scheduler-pause-734678" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:33.118945 1011483 pod_ready.go:38] duration metric: took 12.068019318s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0809 19:13:33.118968 1011483 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0809 19:13:33.126814 1011483 ops.go:34] apiserver oom_adj: -16
	I0809 19:13:33.126837 1011483 kubeadm.go:640] restartCluster took 55.162938995s
	I0809 19:13:33.126844 1011483 kubeadm.go:406] StartCluster complete in 55.233934514s
	I0809 19:13:33.126858 1011483 settings.go:142] acquiring lock: {Name:mk873daac26ba3897eede1f5f8e0b40f2c63510f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 19:13:33.126931 1011483 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17011-816603/kubeconfig
	I0809 19:13:33.128886 1011483 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17011-816603/kubeconfig: {Name:mk4f98edb5dc8df50bdb1180a23f12dadd75d59f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0809 19:13:33.130392 1011483 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0809 19:13:33.130359 1011483 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0809 19:13:33.130393 1011483 kapi.go:59] client config for pause-734678: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17011-816603/.minikube/profiles/pause-734678/client.crt", KeyFile:"/home/jenkins/minikube-integration/17011-816603/.minikube/profiles/pause-734678/client.key", CAFile:"/home/jenkins/minikube-integration/17011-816603/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d27100), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0809 19:13:33.132662 1011483 out.go:177] * Enabled addons: 
	I0809 19:13:33.131257 1011483 config.go:182] Loaded profile config "pause-734678": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0809 19:13:33.134254 1011483 addons.go:502] enable addons completed in 3.932609ms: enabled=[]
	I0809 19:13:33.134668 1011483 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-734678" context rescaled to 1 replicas
	I0809 19:13:33.134708 1011483 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0809 19:13:33.136299 1011483 out.go:177] * Verifying Kubernetes components...
	I0809 19:13:33.137857 1011483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0809 19:13:33.213252 1011483 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0809 19:13:33.213254 1011483 node_ready.go:35] waiting up to 6m0s for node "pause-734678" to be "Ready" ...
	I0809 19:13:33.293435 1011483 node_ready.go:49] node "pause-734678" has status "Ready":"True"
	I0809 19:13:33.293459 1011483 node_ready.go:38] duration metric: took 80.174189ms waiting for node "pause-734678" to be "Ready" ...
	I0809 19:13:33.293468 1011483 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0809 19:13:33.495214 1011483 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-zwnjn" in "kube-system" namespace to be "Ready" ...
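
	This pod_ready polling is minikube's internal readiness gate over the system-critical component labels listed earlier. The equivalent one-off check with stock kubectl would be something like (a sketch; label and namespace as in the log):
	
	  kubectl --context pause-734678 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m0s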
	I0809 19:13:29.638753 1020043 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0809 19:13:29.638991 1020043 start.go:159] libmachine.API.Create for "kindnet-393336" (driver="docker")
	I0809 19:13:29.639017 1020043 client.go:168] LocalClient.Create starting
	I0809 19:13:29.639122 1020043 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca.pem
	I0809 19:13:29.639162 1020043 main.go:141] libmachine: Decoding PEM data...
	I0809 19:13:29.639182 1020043 main.go:141] libmachine: Parsing certificate...
	I0809 19:13:29.639277 1020043 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17011-816603/.minikube/certs/cert.pem
	I0809 19:13:29.639314 1020043 main.go:141] libmachine: Decoding PEM data...
	I0809 19:13:29.639333 1020043 main.go:141] libmachine: Parsing certificate...
	I0809 19:13:29.639768 1020043 cli_runner.go:164] Run: docker network inspect kindnet-393336 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0809 19:13:29.657004 1020043 cli_runner.go:211] docker network inspect kindnet-393336 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0809 19:13:29.657074 1020043 network_create.go:281] running [docker network inspect kindnet-393336] to gather additional debugging logs...
	I0809 19:13:29.657094 1020043 cli_runner.go:164] Run: docker network inspect kindnet-393336
	W0809 19:13:29.675011 1020043 cli_runner.go:211] docker network inspect kindnet-393336 returned with exit code 1
	I0809 19:13:29.675048 1020043 network_create.go:284] error running [docker network inspect kindnet-393336]: docker network inspect kindnet-393336: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-393336 not found
	I0809 19:13:29.675089 1020043 network_create.go:286] output of [docker network inspect kindnet-393336]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-393336 not found
	
	** /stderr **
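
	The inspect failure captured here is the expected negative probe: minikube checks whether the network already exists, and "network kindnet-393336 not found" with exit status 1 is the green light to create it. The equivalent shell idiom (a sketch; subnet and gateway as chosen a few lines below):
	
	  docker network inspect kindnet-393336 >/dev/null 2>&1 || \
	    docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 kindnet-393336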
	I0809 19:13:29.675146 1020043 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0809 19:13:29.696301 1020043 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-29989c4702eb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:ad:8a:31:88} reservation:<nil>}
	I0809 19:13:29.697280 1020043 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f5f975ef181d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:d8:4b:df:e2} reservation:<nil>}
	I0809 19:13:29.698709 1020043 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0015ac140}
	I0809 19:13:29.698741 1020043 network_create.go:123] attempt to create docker network kindnet-393336 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0809 19:13:29.698806 1020043 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-393336 kindnet-393336
	I0809 19:13:29.759883 1020043 network_create.go:107] docker network kindnet-393336 192.168.67.0/24 created
	I0809 19:13:29.759924 1020043 kic.go:117] calculated static IP "192.168.67.2" for the "kindnet-393336" container
	I0809 19:13:29.759986 1020043 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0809 19:13:29.776340 1020043 cli_runner.go:164] Run: docker volume create kindnet-393336 --label name.minikube.sigs.k8s.io=kindnet-393336 --label created_by.minikube.sigs.k8s.io=true
	I0809 19:13:29.795573 1020043 oci.go:103] Successfully created a docker volume kindnet-393336
	I0809 19:13:29.795697 1020043 cli_runner.go:164] Run: docker run --rm --name kindnet-393336-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-393336 --entrypoint /usr/bin/test -v kindnet-393336:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 -d /var/lib
	I0809 19:13:30.331350 1020043 oci.go:107] Successfully prepared a docker volume kindnet-393336
	I0809 19:13:30.331429 1020043 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0809 19:13:30.331459 1020043 kic.go:190] Starting extracting preloaded images to volume ...
	I0809 19:13:30.331578 1020043 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17011-816603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-393336:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 -I lz4 -xf /preloaded.tar -C /extractDir
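	Editor's note: the extraction step above is plain docker: bind-mount the lz4 preload read-only, mount the machine volume at /extractDir, and untar inside a throwaway kicbase container. An os/exec sketch of the same invocation (image digest dropped for brevity; a sketch, not minikube's cli_runner):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		tarball := "/home/jenkins/minikube-integration/17011-816603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4"
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro",
			"-v", "kindnet-393336:/extractDir",
			"gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971",
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Println(err, string(out)) // a non-zero exit surfaces here
		}
	}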
	I0809 19:13:33.893371 1011483 pod_ready.go:92] pod "coredns-5d78c9869d-zwnjn" in "kube-system" namespace has status "Ready":"True"
	I0809 19:13:33.964186 1011483 pod_ready.go:81] duration metric: took 468.932979ms waiting for pod "coredns-5d78c9869d-zwnjn" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:33.964221 1011483 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-734678" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:34.293243 1011483 pod_ready.go:92] pod "etcd-pause-734678" in "kube-system" namespace has status "Ready":"True"
	I0809 19:13:34.293267 1011483 pod_ready.go:81] duration metric: took 329.02896ms waiting for pod "etcd-pause-734678" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:34.293279 1011483 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-734678" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:34.693726 1011483 pod_ready.go:92] pod "kube-apiserver-pause-734678" in "kube-system" namespace has status "Ready":"True"
	I0809 19:13:34.693766 1011483 pod_ready.go:81] duration metric: took 400.47938ms waiting for pod "kube-apiserver-pause-734678" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:34.693783 1011483 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-734678" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:35.093279 1011483 pod_ready.go:92] pod "kube-controller-manager-pause-734678" in "kube-system" namespace has status "Ready":"True"
	I0809 19:13:35.093303 1011483 pod_ready.go:81] duration metric: took 399.512359ms waiting for pod "kube-controller-manager-pause-734678" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:35.093313 1011483 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q25ss" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:35.493610 1011483 pod_ready.go:92] pod "kube-proxy-q25ss" in "kube-system" namespace has status "Ready":"True"
	I0809 19:13:35.493634 1011483 pod_ready.go:81] duration metric: took 400.315645ms waiting for pod "kube-proxy-q25ss" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:35.493646 1011483 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-734678" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:35.893303 1011483 pod_ready.go:92] pod "kube-scheduler-pause-734678" in "kube-system" namespace has status "Ready":"True"
	I0809 19:13:35.893328 1011483 pod_ready.go:81] duration metric: took 399.676794ms waiting for pod "kube-scheduler-pause-734678" in "kube-system" namespace to be "Ready" ...
	I0809 19:13:35.893339 1011483 pod_ready.go:38] duration metric: took 2.599855521s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
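	Editor's note: each pod_ready step above polls one pod until its Ready condition reports True, at roughly the 400ms cadence visible in the timestamps. A condensed client-go sketch of that check, assuming a *kubernetes.Clientset already built from the kubeconfig (minikube's own pod_ready.go differs in detail):

	package readiness

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady polls until the pod's Ready condition is True or timeout elapses.
	func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil // the has status "Ready":"True" lines above
					}
				}
			}
			time.Sleep(400 * time.Millisecond)
		}
		return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
	}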
	I0809 19:13:35.893356 1011483 api_server.go:52] waiting for apiserver process to appear ...
	I0809 19:13:35.893413 1011483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0809 19:13:35.906008 1011483 api_server.go:72] duration metric: took 2.771266372s to wait for apiserver process to appear ...
	I0809 19:13:35.906040 1011483 api_server.go:88] waiting for apiserver healthz status ...
	I0809 19:13:35.906061 1011483 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0809 19:13:35.911748 1011483 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0809 19:13:35.912824 1011483 api_server.go:141] control plane version: v1.27.4
	I0809 19:13:35.912846 1011483 api_server.go:131] duration metric: took 6.798164ms to wait for apiserver health ...
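	Editor's note: the healthz probe above is a plain HTTPS GET against the apiserver that expects a 200 with body "ok". A minimal sketch; the real check dials with the cluster CA and client certificates, while this one skips verification purely for brevity:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Assumption: verification skipped for the sketch only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.85.2:8443/healthz")
		if err != nil {
			fmt.Println("healthz not reachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
	}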
	I0809 19:13:35.912856 1011483 system_pods.go:43] waiting for kube-system pods to appear ...
	I0809 19:13:36.097199 1011483 system_pods.go:59] 7 kube-system pods found
	I0809 19:13:36.097230 1011483 system_pods.go:61] "coredns-5d78c9869d-zwnjn" [7c939e8b-f847-44a3-984e-6276b66d3afc] Running
	I0809 19:13:36.097235 1011483 system_pods.go:61] "etcd-pause-734678" [273b12a3-5c11-4e4e-9d26-b9c102acd768] Running
	I0809 19:13:36.097239 1011483 system_pods.go:61] "kindnet-xxgzn" [9265ba13-07f0-4c44-a920-74175ec0e07a] Running
	I0809 19:13:36.097244 1011483 system_pods.go:61] "kube-apiserver-pause-734678" [29c446c4-0a22-46f2-b796-b3d0207f125f] Running
	I0809 19:13:36.097248 1011483 system_pods.go:61] "kube-controller-manager-pause-734678" [2405dfa9-ff6f-448f-921c-4e963bae6ab8] Running
	I0809 19:13:36.097253 1011483 system_pods.go:61] "kube-proxy-q25ss" [23d08f14-790a-46be-87b1-032c144a76cb] Running
	I0809 19:13:36.097256 1011483 system_pods.go:61] "kube-scheduler-pause-734678" [6e0a6dd5-f76d-4b78-8df9-f088f418a79e] Running
	I0809 19:13:36.097266 1011483 system_pods.go:74] duration metric: took 184.400786ms to wait for pod list to return data ...
	I0809 19:13:36.097275 1011483 default_sa.go:34] waiting for default service account to be created ...
	I0809 19:13:36.293083 1011483 default_sa.go:45] found service account: "default"
	I0809 19:13:36.293112 1011483 default_sa.go:55] duration metric: took 195.830656ms for default service account to be created ...
	I0809 19:13:36.293123 1011483 system_pods.go:116] waiting for k8s-apps to be running ...
	I0809 19:13:36.501290 1011483 system_pods.go:86] 7 kube-system pods found
	I0809 19:13:36.501318 1011483 system_pods.go:89] "coredns-5d78c9869d-zwnjn" [7c939e8b-f847-44a3-984e-6276b66d3afc] Running
	I0809 19:13:36.501324 1011483 system_pods.go:89] "etcd-pause-734678" [273b12a3-5c11-4e4e-9d26-b9c102acd768] Running
	I0809 19:13:36.501328 1011483 system_pods.go:89] "kindnet-xxgzn" [9265ba13-07f0-4c44-a920-74175ec0e07a] Running
	I0809 19:13:36.501332 1011483 system_pods.go:89] "kube-apiserver-pause-734678" [29c446c4-0a22-46f2-b796-b3d0207f125f] Running
	I0809 19:13:36.501336 1011483 system_pods.go:89] "kube-controller-manager-pause-734678" [2405dfa9-ff6f-448f-921c-4e963bae6ab8] Running
	I0809 19:13:36.501343 1011483 system_pods.go:89] "kube-proxy-q25ss" [23d08f14-790a-46be-87b1-032c144a76cb] Running
	I0809 19:13:36.501349 1011483 system_pods.go:89] "kube-scheduler-pause-734678" [6e0a6dd5-f76d-4b78-8df9-f088f418a79e] Running
	I0809 19:13:36.501358 1011483 system_pods.go:126] duration metric: took 208.229085ms to wait for k8s-apps to be running ...
	I0809 19:13:36.501367 1011483 system_svc.go:44] waiting for kubelet service to be running ....
	I0809 19:13:36.501418 1011483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0809 19:13:36.533536 1011483 system_svc.go:56] duration metric: took 32.149726ms WaitForService to wait for kubelet.
	I0809 19:13:36.533577 1011483 kubeadm.go:581] duration metric: took 3.398838007s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0809 19:13:36.533602 1011483 node_conditions.go:102] verifying NodePressure condition ...
	I0809 19:13:36.693814 1011483 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0809 19:13:36.693838 1011483 node_conditions.go:123] node cpu capacity is 8
	I0809 19:13:36.693849 1011483 node_conditions.go:105] duration metric: took 160.242208ms to run NodePressure ...
	I0809 19:13:36.693859 1011483 start.go:228] waiting for startup goroutines ...
	I0809 19:13:36.693865 1011483 start.go:233] waiting for cluster config update ...
	I0809 19:13:36.693871 1011483 start.go:242] writing updated cluster config ...
	I0809 19:13:36.694238 1011483 ssh_runner.go:195] Run: rm -f paused
	I0809 19:13:36.762686 1011483 start.go:599] kubectl: 1.27.4, cluster: 1.27.4 (minor skew: 0)
	I0809 19:13:36.765170 1011483 out.go:177] * Done! kubectl is now configured to use "pause-734678" cluster and "default" namespace by default
	I0809 19:13:34.602256  997908 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.060359396s)
	W0809 19:13:34.602299  997908 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I0809 19:13:34.602311  997908 logs.go:123] Gathering logs for kube-apiserver [f6c6efaf9452f4c2a29c61099f9c9129531fa3999a073341e988ff5ee0d6b94d] ...
	I0809 19:13:34.602323  997908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6c6efaf9452f4c2a29c61099f9c9129531fa3999a073341e988ff5ee0d6b94d"
	I0809 19:13:34.642717  997908 logs.go:123] Gathering logs for kube-apiserver [e97b8a3ea12a71cf8984e6680c20bce8316826fce63431b2832bfff8f81a7e13] ...
	I0809 19:13:34.642754  997908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e97b8a3ea12a71cf8984e6680c20bce8316826fce63431b2832bfff8f81a7e13"
	I0809 19:13:35.617230 1020043 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17011-816603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-393336:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 -I lz4 -xf /preloaded.tar -C /extractDir: (5.285556319s)
	I0809 19:13:35.617263 1020043 kic.go:199] duration metric: took 5.285801 seconds to extract preloaded images to volume
	W0809 19:13:35.617428 1020043 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0809 19:13:35.617543 1020043 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0809 19:13:35.676568 1020043 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-393336 --name kindnet-393336 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-393336 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-393336 --network kindnet-393336 --ip 192.168.67.2 --volume kindnet-393336:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37
	I0809 19:13:35.999021 1020043 cli_runner.go:164] Run: docker container inspect kindnet-393336 --format={{.State.Running}}
	I0809 19:13:36.021398 1020043 cli_runner.go:164] Run: docker container inspect kindnet-393336 --format={{.State.Status}}
	I0809 19:13:36.041913 1020043 cli_runner.go:164] Run: docker exec kindnet-393336 stat /var/lib/dpkg/alternatives/iptables
	I0809 19:13:36.125929 1020043 oci.go:144] the created container "kindnet-393336" has a running status.
	I0809 19:13:36.125962 1020043 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17011-816603/.minikube/machines/kindnet-393336/id_rsa...
	I0809 19:13:36.389629 1020043 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17011-816603/.minikube/machines/kindnet-393336/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0809 19:13:36.420632 1020043 cli_runner.go:164] Run: docker container inspect kindnet-393336 --format={{.State.Status}}
	I0809 19:13:36.448814 1020043 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0809 19:13:36.448834 1020043 kic_runner.go:114] Args: [docker exec --privileged kindnet-393336 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0809 19:13:36.566576 1020043 cli_runner.go:164] Run: docker container inspect kindnet-393336 --format={{.State.Status}}
	I0809 19:13:36.589534 1020043 machine.go:88] provisioning docker machine ...
	I0809 19:13:36.589580 1020043 ubuntu.go:169] provisioning hostname "kindnet-393336"
	I0809 19:13:36.589651 1020043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-393336
	I0809 19:13:36.613268 1020043 main.go:141] libmachine: Using SSH client type: native
	I0809 19:13:36.613746 1020043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 127.0.0.1 33629 <nil> <nil>}
	I0809 19:13:36.613765 1020043 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-393336 && echo "kindnet-393336" | sudo tee /etc/hostname
	I0809 19:13:36.826622 1020043 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-393336
	
	I0809 19:13:36.826699 1020043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-393336
	I0809 19:13:36.849364 1020043 main.go:141] libmachine: Using SSH client type: native
	I0809 19:13:36.850066 1020043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 127.0.0.1 33629 <nil> <nil>}
	I0809 19:13:36.850111 1020043 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-393336' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-393336/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-393336' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0809 19:13:36.996508 1020043 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0809 19:13:36.996532 1020043 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17011-816603/.minikube CaCertPath:/home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17011-816603/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17011-816603/.minikube}
	I0809 19:13:36.996553 1020043 ubuntu.go:177] setting up certificates
	I0809 19:13:36.996563 1020043 provision.go:83] configureAuth start
	I0809 19:13:36.996620 1020043 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-393336
	I0809 19:13:37.016306 1020043 provision.go:138] copyHostCerts
	I0809 19:13:37.016371 1020043 exec_runner.go:144] found /home/jenkins/minikube-integration/17011-816603/.minikube/ca.pem, removing ...
	I0809 19:13:37.016382 1020043 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17011-816603/.minikube/ca.pem
	I0809 19:13:37.016452 1020043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17011-816603/.minikube/ca.pem (1082 bytes)
	I0809 19:13:37.016561 1020043 exec_runner.go:144] found /home/jenkins/minikube-integration/17011-816603/.minikube/cert.pem, removing ...
	I0809 19:13:37.016572 1020043 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17011-816603/.minikube/cert.pem
	I0809 19:13:37.016610 1020043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17011-816603/.minikube/cert.pem (1123 bytes)
	I0809 19:13:37.016693 1020043 exec_runner.go:144] found /home/jenkins/minikube-integration/17011-816603/.minikube/key.pem, removing ...
	I0809 19:13:37.016707 1020043 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17011-816603/.minikube/key.pem
	I0809 19:13:37.016742 1020043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17011-816603/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17011-816603/.minikube/key.pem (1679 bytes)
	I0809 19:13:37.016813 1020043 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17011-816603/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca-key.pem org=jenkins.kindnet-393336 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kindnet-393336]
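	Editor's note: the provision step above mints a server certificate signed by the minikube CA, carrying the listed IP and DNS SANs. A compressed crypto/x509 sketch of that operation, assuming the CA key is PKCS#1 RSA and with error handling elided (file names follow the log; this is not minikube's provision.go):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		caPEM, _ := os.ReadFile("ca.pem")
		caKeyPEM, _ := os.ReadFile("ca-key.pem")
		caBlock, _ := pem.Decode(caPEM)
		caCert, _ := x509.ParseCertificate(caBlock.Bytes)
		keyBlock, _ := pem.Decode(caKeyPEM)
		caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumption: PKCS#1 RSA key

		serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.kindnet-393336"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "kindnet-393336"},
			IPAddresses:  []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
		}
		der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
		_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}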
	I0809 19:13:37.238793 1020043 provision.go:172] copyRemoteCerts
	I0809 19:13:37.238846 1020043 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0809 19:13:37.238882 1020043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-393336
	I0809 19:13:37.256652 1020043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33629 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/kindnet-393336/id_rsa Username:docker}
	I0809 19:13:37.358484 1020043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0809 19:13:37.385105 1020043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0809 19:13:37.410291 1020043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0809 19:13:37.434183 1020043 provision.go:86] duration metric: configureAuth took 437.603116ms
	I0809 19:13:37.434217 1020043 ubuntu.go:193] setting minikube options for container-runtime
	I0809 19:13:37.434412 1020043 config.go:182] Loaded profile config "kindnet-393336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0809 19:13:37.434534 1020043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-393336
	I0809 19:13:37.451876 1020043 main.go:141] libmachine: Using SSH client type: native
	I0809 19:13:37.452282 1020043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ef00] 0x811fa0 <nil>  [] 0s} 127.0.0.1 33629 <nil> <nil>}
	I0809 19:13:37.452309 1020043 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0809 19:13:37.727848 1020043 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0809 19:13:37.727879 1020043 machine.go:91] provisioned docker machine in 1.138315299s
	I0809 19:13:37.727891 1020043 client.go:171] LocalClient.Create took 8.088868131s
	I0809 19:13:37.727909 1020043 start.go:167] duration metric: libmachine.API.Create for "kindnet-393336" took 8.088917229s
	I0809 19:13:37.727918 1020043 start.go:300] post-start starting for "kindnet-393336" (driver="docker")
	I0809 19:13:37.727931 1020043 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0809 19:13:37.728009 1020043 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0809 19:13:37.728057 1020043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-393336
	I0809 19:13:37.751088 1020043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33629 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/kindnet-393336/id_rsa Username:docker}
	I0809 19:13:37.857216 1020043 ssh_runner.go:195] Run: cat /etc/os-release
	I0809 19:13:37.860882 1020043 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0809 19:13:37.860911 1020043 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0809 19:13:37.860921 1020043 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0809 19:13:37.860927 1020043 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0809 19:13:37.860942 1020043 filesync.go:126] Scanning /home/jenkins/minikube-integration/17011-816603/.minikube/addons for local assets ...
	I0809 19:13:37.861008 1020043 filesync.go:126] Scanning /home/jenkins/minikube-integration/17011-816603/.minikube/files for local assets ...
	I0809 19:13:37.861102 1020043 filesync.go:149] local asset: /home/jenkins/minikube-integration/17011-816603/.minikube/files/etc/ssl/certs/8234342.pem -> 8234342.pem in /etc/ssl/certs
	I0809 19:13:37.861224 1020043 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0809 19:13:37.871193 1020043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17011-816603/.minikube/files/etc/ssl/certs/8234342.pem --> /etc/ssl/certs/8234342.pem (1708 bytes)
	I0809 19:13:37.901972 1020043 start.go:303] post-start completed in 174.039843ms
	I0809 19:13:37.902413 1020043 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-393336
	I0809 19:13:37.925940 1020043 profile.go:148] Saving config to /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/kindnet-393336/config.json ...
	I0809 19:13:37.926184 1020043 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0809 19:13:37.926228 1020043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-393336
	I0809 19:13:37.947808 1020043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33629 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/kindnet-393336/id_rsa Username:docker}
	I0809 19:13:38.045498 1020043 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0809 19:13:38.050252 1020043 start.go:128] duration metric: createHost completed in 8.413196273s
	I0809 19:13:38.050273 1020043 start.go:83] releasing machines lock for "kindnet-393336", held for 8.413362168s
	I0809 19:13:38.050339 1020043 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-393336
	I0809 19:13:38.070851 1020043 ssh_runner.go:195] Run: cat /version.json
	I0809 19:13:38.070906 1020043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-393336
	I0809 19:13:38.071116 1020043 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0809 19:13:38.071173 1020043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-393336
	I0809 19:13:38.092609 1020043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33629 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/kindnet-393336/id_rsa Username:docker}
	I0809 19:13:38.095454 1020043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33629 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/kindnet-393336/id_rsa Username:docker}
	I0809 19:13:38.300461 1020043 ssh_runner.go:195] Run: systemctl --version
	I0809 19:13:38.305694 1020043 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0809 19:13:38.463158 1020043 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0809 19:13:38.467568 1020043 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0809 19:13:38.488407 1020043 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0809 19:13:38.488514 1020043 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0809 19:13:38.520430 1020043 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
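	Editor's note: the two find runs above sideline pre-existing loopback, bridge, and podman CNI configs by renaming them with a .mk_disabled suffix, leaving only the CNI that minikube installs. The same move in a few lines of Go (patterns mirror the find invocations; sketch only):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		for _, pattern := range []string{"*loopback.conf*", "*bridge*", "*podman*"} {
			matches, _ := filepath.Glob(filepath.Join("/etc/cni/net.d", pattern))
			for _, m := range matches {
				if filepath.Ext(m) == ".mk_disabled" {
					continue // already sidelined
				}
				if err := os.Rename(m, m+".mk_disabled"); err != nil {
					fmt.Println("rename failed:", err)
				}
			}
		}
	}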
	I0809 19:13:38.520451 1020043 start.go:466] detecting cgroup driver to use...
	I0809 19:13:38.520493 1020043 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0809 19:13:38.520537 1020043 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0809 19:13:38.541749 1020043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0809 19:13:38.555238 1020043 docker.go:196] disabling cri-docker service (if available) ...
	I0809 19:13:38.555290 1020043 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0809 19:13:38.572244 1020043 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0809 19:13:38.587734 1020043 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0809 19:13:38.683596 1020043 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0809 19:13:38.778393 1020043 docker.go:212] disabling docker service ...
	I0809 19:13:38.778444 1020043 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0809 19:13:38.803024 1020043 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0809 19:13:38.816385 1020043 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0809 19:13:38.911306 1020043 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0809 19:13:39.008446 1020043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0809 19:13:39.020872 1020043 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0809 19:13:39.036880 1020043 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0809 19:13:39.036933 1020043 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0809 19:13:39.046028 1020043 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0809 19:13:39.046086 1020043 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0809 19:13:39.057366 1020043 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0809 19:13:39.067860 1020043 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
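	Editor's note: after the three sed edits above, the CRI-O drop-in should read roughly as follows. This is an illustrative reconstruction of /etc/crio/crio.conf.d/02-crio.conf, not a capture from the run:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"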
	I0809 19:13:39.079851 1020043 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0809 19:13:39.089608 1020043 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0809 19:13:39.098062 1020043 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0809 19:13:39.106752 1020043 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0809 19:13:39.197801 1020043 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0809 19:13:39.323378 1020043 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0809 19:13:39.323439 1020043 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0809 19:13:39.327712 1020043 start.go:534] Will wait 60s for crictl version
	I0809 19:13:39.327768 1020043 ssh_runner.go:195] Run: which crictl
	I0809 19:13:39.331607 1020043 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0809 19:13:39.372166 1020043 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
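	Editor's note: the two 60s waits above simply poll until the CRI socket exists and crictl answers. A tiny Go sketch of the socket half (path from the log; the 500ms interval is an assumption):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/crio/crio.sock"
		deadline := time.Now().Add(60 * time.Second)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(sock); err == nil {
				fmt.Println("socket ready:", sock)
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for", sock)
	}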
	I0809 19:13:39.372255 1020043 ssh_runner.go:195] Run: crio --version
	I0809 19:13:39.408448 1020043 ssh_runner.go:195] Run: crio --version
	I0809 19:13:39.447257 1020043 out.go:177] * Preparing Kubernetes v1.27.4 on CRI-O 1.24.6 ...
	
	* 
	* ==> CRI-O <==
	* Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.255225339Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/0930f8838726042fda3b2d8b712144209a3492c5421d3d5e0221e976c62f4b3d/merged/etc/group: no such file or directory"
	Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.330187354Z" level=info msg="Created container 84f64260cc001cef549dda00629629686a929994b7740c80ec4068efcea903c0: kube-system/kindnet-xxgzn/kindnet-cni" id=e714c164-5750-4cc0-b0d6-d031b6b2ff80 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.330588725Z" level=info msg="Created container f2a29b737960d8ad19e084cd5f3db8078bd8b5a538bb96b63d3a142e7fcb8129: kube-system/kube-proxy-q25ss/kube-proxy" id=41c13cd0-e40d-41cf-8cfe-11701666d0de name=/runtime.v1.RuntimeService/CreateContainer
	Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.330821522Z" level=info msg="Starting container: 84f64260cc001cef549dda00629629686a929994b7740c80ec4068efcea903c0" id=35630fa2-91ce-46f3-8464-e7067760ce6e name=/runtime.v1.RuntimeService/StartContainer
	Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.354810352Z" level=info msg="Starting container: f2a29b737960d8ad19e084cd5f3db8078bd8b5a538bb96b63d3a142e7fcb8129" id=71c1e0e7-0b94-47f5-a7d3-5278373fcb0a name=/runtime.v1.RuntimeService/StartContainer
	Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.358029133Z" level=info msg="Created container f750628277d1187aae33f70880684a8297a9fb5d80845e441910a6aed0243f59: kube-system/coredns-5d78c9869d-zwnjn/coredns" id=e0049cd1-4019-42e0-a198-62f778e522b7 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.358644072Z" level=info msg="Starting container: f750628277d1187aae33f70880684a8297a9fb5d80845e441910a6aed0243f59" id=843936d4-0fac-45eb-8a64-f46231c63427 name=/runtime.v1.RuntimeService/StartContainer
	Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.364244598Z" level=info msg="Started container" PID=4190 containerID=84f64260cc001cef549dda00629629686a929994b7740c80ec4068efcea903c0 description=kube-system/kindnet-xxgzn/kindnet-cni id=35630fa2-91ce-46f3-8464-e7067760ce6e name=/runtime.v1.RuntimeService/StartContainer sandboxID=b3345840237267ff07cd1adf95864b9bf4139b9ed6a1d79057f1ade8554548d9
	Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.365716916Z" level=info msg="Started container" PID=4200 containerID=f2a29b737960d8ad19e084cd5f3db8078bd8b5a538bb96b63d3a142e7fcb8129 description=kube-system/kube-proxy-q25ss/kube-proxy id=71c1e0e7-0b94-47f5-a7d3-5278373fcb0a name=/runtime.v1.RuntimeService/StartContainer sandboxID=10a4f63fe91299bda7c87ceab088af23f72fc63df7d535bae813bae934ece015
	Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.370179696Z" level=info msg="Started container" PID=4197 containerID=f750628277d1187aae33f70880684a8297a9fb5d80845e441910a6aed0243f59 description=kube-system/coredns-5d78c9869d-zwnjn/coredns id=843936d4-0fac-45eb-8a64-f46231c63427 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b84b19cee96f1e695c1e86943773c0e8a9b43ae52c30af05ba93ec149416d8f1
	Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.759568768Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.763420485Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.763453770Z" level=info msg="Updated default CNI network name to kindnet"
	Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.763470492Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.767157196Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.767189259Z" level=info msg="Updated default CNI network name to kindnet"
	Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.767210576Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.770967757Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.770999537Z" level=info msg="Updated default CNI network name to kindnet"
	Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.856667728Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.860690516Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.860725779Z" level=info msg="Updated default CNI network name to kindnet"
	Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.860751958Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.864693013Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 09 19:13:19 pause-734678 crio[2855]: time="2023-08-09 19:13:19.864731637Z" level=info msg="Updated default CNI network name to kindnet"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	f750628277d11       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   20 seconds ago       Running             coredns                   2                   b84b19cee96f1       coredns-5d78c9869d-zwnjn
	f2a29b737960d       6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4   20 seconds ago       Running             kube-proxy                2                   10a4f63fe9129       kube-proxy-q25ss
	84f64260cc001       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da   20 seconds ago       Running             kindnet-cni               2                   b334584023726       kindnet-xxgzn
	153e7989592fa       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681   24 seconds ago       Running             etcd                      3                   354b8567d20f9       etcd-pause-734678
	5f5b0a3042ab0       e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c   24 seconds ago       Running             kube-apiserver            2                   28e451f77eb1d       kube-apiserver-pause-734678
	68a9586fd8284       f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5   24 seconds ago       Running             kube-controller-manager   3                   5d850272d862c       kube-controller-manager-pause-734678
	049cef752e63c       98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16   39 seconds ago       Running             kube-scheduler            2                   96ff7fff17c06       kube-scheduler-pause-734678
	0f77a9a2da8b6       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681   46 seconds ago       Exited              etcd                      2                   354b8567d20f9       etcd-pause-734678
	3b23f5302c00a       6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4   49 seconds ago       Exited              kube-proxy                1                   10a4f63fe9129       kube-proxy-q25ss
	4cdeeccf1e020       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da   49 seconds ago       Exited              kindnet-cni               1                   b334584023726       kindnet-xxgzn
	bc3074123d41a       f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5   52 seconds ago       Exited              kube-controller-manager   2                   5d850272d862c       kube-controller-manager-pause-734678
	d222edca27c82       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   54 seconds ago       Exited              coredns                   1                   b84b19cee96f1       coredns-5d78c9869d-zwnjn
	f112396efdb77       e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c   58 seconds ago       Exited              kube-apiserver            1                   28e451f77eb1d       kube-apiserver-pause-734678
	d5c75acd23204       98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16   About a minute ago   Exited              kube-scheduler            1                   96ff7fff17c06       kube-scheduler-pause-734678
	
	* 
	* ==> coredns [d222edca27c82b853c2e77a2ee623d656f0e5640f2d003d2b5a25cc08495ae21] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:42683 - 10087 "HINFO IN 34247377608641667.3491879901601238094. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.008825908s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": net/http: TLS handshake timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [f750628277d1187aae33f70880684a8297a9fb5d80845e441910a6aed0243f59] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:33514 - 27895 "HINFO IN 6356958996276243231.7236170660077127442. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009720058s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-734678
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-734678
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e286a113bb5db20a65222adef757d15268cdbb1a
	                    minikube.k8s.io/name=pause-734678
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_09T19_11_42_0700
	                    minikube.k8s.io/version=v1.31.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Aug 2023 19:11:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-734678
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Aug 2023 19:13:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Aug 2023 19:13:18 +0000   Wed, 09 Aug 2023 19:11:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Aug 2023 19:13:18 +0000   Wed, 09 Aug 2023 19:11:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Aug 2023 19:13:18 +0000   Wed, 09 Aug 2023 19:11:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Aug 2023 19:13:18 +0000   Wed, 09 Aug 2023 19:12:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-734678
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	System Info:
	  Machine ID:                 b1ebfdb1444641878bca9c12f34a970b
	  System UUID:                bdc2ea85-c2e8-4868-b133-226ca5414fa8
	  Boot ID:                    ea1f61fe-b434-46c1-afe7-153d4b2d65ef
	  Kernel Version:             5.15.0-1038-gcp
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.27.4
	  Kube-Proxy Version:         v1.27.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-zwnjn                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     106s
	  kube-system                 etcd-pause-734678                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         119s
	  kube-system                 kindnet-xxgzn                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-pause-734678             250m (3%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-pause-734678    200m (2%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-proxy-q25ss                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-pause-734678             100m (1%)     0 (0%)      0 (0%)           0 (0%)         119s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 105s                 kube-proxy       
	  Normal  Starting                 20s                  kube-proxy       
	  Normal  Starting                 42s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m5s (x8 over 2m5s)  kubelet          Node pause-734678 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m5s (x8 over 2m5s)  kubelet          Node pause-734678 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m5s (x8 over 2m5s)  kubelet          Node pause-734678 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     119s                 kubelet          Node pause-734678 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    119s                 kubelet          Node pause-734678 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  119s                 kubelet          Node pause-734678 status is now: NodeHasSufficientMemory
	  Normal  Starting                 119s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           107s                 node-controller  Node pause-734678 event: Registered Node pause-734678 in Controller
	  Normal  NodeReady                75s                  kubelet          Node pause-734678 status is now: NodeReady
	  Normal  Starting                 26s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25s (x8 over 25s)    kubelet          Node pause-734678 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x8 over 25s)    kubelet          Node pause-734678 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x8 over 25s)    kubelet          Node pause-734678 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9s                   node-controller  Node pause-734678 event: Registered Node pause-734678 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000007] ll header: 00000000: 4e 93 4f 7d 89 d1 46 70 9f b9 c1 47 08 00
	[ +16.130450] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 4e 93 4f 7d 89 d1 46 70 9f b9 c1 47 08 00
	[Aug 9 18:51] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 4e 93 4f 7d 89 d1 46 70 9f b9 c1 47 08 00
	[Aug 9 19:01] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-f5f975ef181d
	[  +0.000006] ll header: 00000000: 02 42 d8 4b df e2 02 42 c0 a8 3a 02 08 00
	[  +1.024552] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-f5f975ef181d
	[  +0.000023] ll header: 00000000: 02 42 d8 4b df e2 02 42 c0 a8 3a 02 08 00
	[  +2.015754] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-f5f975ef181d
	[  +0.000007] ll header: 00000000: 02 42 d8 4b df e2 02 42 c0 a8 3a 02 08 00
	[  +4.223604] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-f5f975ef181d
	[  +0.000023] ll header: 00000000: 02 42 d8 4b df e2 02 42 c0 a8 3a 02 08 00
	[  +8.191193] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-f5f975ef181d
	[  +0.000029] ll header: 00000000: 02 42 d8 4b df e2 02 42 c0 a8 3a 02 08 00
	[Aug 9 19:03] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-f5f975ef181d
	[  +0.000005] ll header: 00000000: 02 42 d8 4b df e2 02 42 c0 a8 3a 02 08 00
	[  +1.014398] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-f5f975ef181d
	[  +0.000023] ll header: 00000000: 02 42 d8 4b df e2 02 42 c0 a8 3a 02 08 00
	[  +2.015765] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-f5f975ef181d
	[  +0.000022] ll header: 00000000: 02 42 d8 4b df e2 02 42 c0 a8 3a 02 08 00
	[  +4.031615] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-f5f975ef181d
	[  +0.000026] ll header: 00000000: 02 42 d8 4b df e2 02 42 c0 a8 3a 02 08 00
	[  +8.191161] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-f5f975ef181d
	[  +0.000005] ll header: 00000000: 02 42 d8 4b df e2 02 42 c0 a8 3a 02 08 00
	
	* 
	* ==> etcd [0f77a9a2da8b6d363a14c0b5d40080f9af67c6640d2822a43e82eb09bad297d9] <==
	* {"level":"info","ts":"2023-08-09T19:12:53.624Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-08-09T19:12:53.624Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2023-08-09T19:12:53.624Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2023-08-09T19:12:53.624Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-08-09T19:12:53.624Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-08-09T19:12:54.615Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2023-08-09T19:12:54.615Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2023-08-09T19:12:54.616Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2023-08-09T19:12:54.616Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2023-08-09T19:12:54.616Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2023-08-09T19:12:54.616Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2023-08-09T19:12:54.616Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2023-08-09T19:12:54.618Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:pause-734678 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-09T19:12:54.618Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-09T19:12:54.618Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-09T19:12:54.618Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-09T19:12:54.618Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-08-09T19:12:54.620Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2023-08-09T19:12:54.620Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-09T19:12:58.106Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-08-09T19:12:58.106Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"pause-734678","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"info","ts":"2023-08-09T19:12:58.165Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2023-08-09T19:12:58.168Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2023-08-09T19:12:58.169Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2023-08-09T19:12:58.169Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"pause-734678","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	* 
	* ==> etcd [153e7989592fa854ce1ba5baf4e16eb9ce317ad9f57784a948fd4f0351bf484d] <==
	* {"level":"info","ts":"2023-08-09T19:13:15.887Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-08-09T19:13:15.887Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-08-09T19:13:15.882Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2023-08-09T19:13:15.887Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2023-08-09T19:13:15.888Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-09T19:13:15.888Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-09T19:13:15.898Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-08-09T19:13:15.898Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-08-09T19:13:15.898Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-08-09T19:13:15.898Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2023-08-09T19:13:15.898Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2023-08-09T19:13:17.359Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 3"}
	{"level":"info","ts":"2023-08-09T19:13:17.359Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 3"}
	{"level":"info","ts":"2023-08-09T19:13:17.359Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2023-08-09T19:13:17.359Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 4"}
	{"level":"info","ts":"2023-08-09T19:13:17.359Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 4"}
	{"level":"info","ts":"2023-08-09T19:13:17.359Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 4"}
	{"level":"info","ts":"2023-08-09T19:13:17.359Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 4"}
	{"level":"info","ts":"2023-08-09T19:13:17.360Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:pause-734678 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-09T19:13:17.360Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-09T19:13:17.360Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-09T19:13:17.360Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-09T19:13:17.360Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-08-09T19:13:17.361Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2023-08-09T19:13:17.361Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  19:13:40 up  2:56,  0 users,  load average: 3.70, 3.27, 2.29
	Linux pause-734678 5.15.0-1038-gcp #46~20.04.1-Ubuntu SMP Fri Jul 14 09:48:19 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [4cdeeccf1e020b0f8436e2bf45bc3520fc0e59a7253a1255b7b6a12cae630441] <==
	* I0809 19:12:50.660436       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0809 19:12:50.660485       1 main.go:107] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0809 19:12:50.660669       1 main.go:116] setting mtu 1500 for CNI 
	I0809 19:12:50.660688       1 main.go:146] kindnetd IP family: "ipv4"
	I0809 19:12:50.660705       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0809 19:12:57.955192       1 main.go:191] Failed to get nodes, retrying after error: nodes is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "nodes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "kindnet" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found]
	I0809 19:12:57.956400       1 main.go:191] Failed to get nodes, retrying after error: nodes is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "nodes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "kindnet" not found]
	
	* 
	* ==> kindnet [84f64260cc001cef549dda00629629686a929994b7740c80ec4068efcea903c0] <==
	* I0809 19:13:19.458574       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0809 19:13:19.458637       1 main.go:107] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0809 19:13:19.458870       1 main.go:116] setting mtu 1500 for CNI 
	I0809 19:13:19.458891       1 main.go:146] kindnetd IP family: "ipv4"
	I0809 19:13:19.458916       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0809 19:13:19.759308       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0809 19:13:19.759334       1 main.go:227] handling current node
	I0809 19:13:29.867750       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0809 19:13:29.867781       1 main.go:227] handling current node
	I0809 19:13:39.879831       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0809 19:13:39.879862       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [5f5b0a3042ab0093ffe550869327a1890b14c6143f51f0e15fae74001562efc4] <==
	* I0809 19:13:18.500464       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0809 19:13:18.501479       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0809 19:13:18.501458       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0809 19:13:18.501469       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0809 19:13:18.503499       1 controller.go:83] Starting OpenAPI AggregationController
	I0809 19:13:18.676279       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0809 19:13:18.677847       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0809 19:13:18.677986       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0809 19:13:18.754653       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0809 19:13:18.754793       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0809 19:13:18.754815       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0809 19:13:18.754912       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0809 19:13:18.754950       1 shared_informer.go:318] Caches are synced for configmaps
	I0809 19:13:18.755113       1 aggregator.go:152] initial CRD sync complete...
	I0809 19:13:18.755133       1 autoregister_controller.go:141] Starting autoregister controller
	I0809 19:13:18.755140       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0809 19:13:18.755148       1 cache.go:39] Caches are synced for autoregister controller
	I0809 19:13:18.755305       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0809 19:13:19.253730       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0809 19:13:19.504043       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0809 19:13:20.878673       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0809 19:13:20.971984       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0809 19:13:20.981283       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0809 19:13:21.029826       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0809 19:13:21.037797       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-apiserver [f112396efdb77643d61e214e783dfe5ff3446c800c5ca9b473e5bc872478e5f6] <==
	* }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0809 19:13:13.655106       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0809 19:13:13.683337       1 logging.go:59] [core] [Channel #122 SubChannel #123] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0809 19:13:13.687837       1 logging.go:59] [core] [Channel #68 SubChannel #69] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-controller-manager [68a9586fd828423d665f37ecb232fa460fbd27ae96a48b6e0d39492ad016352e] <==
	* I0809 19:13:31.705065       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-734678"
	I0809 19:13:31.705193       1 node_lifecycle_controller.go:1069] "Controller detected that zone is now in new state" zone="" newState=Normal
	I0809 19:13:31.707880       1 shared_informer.go:318] Caches are synced for node
	I0809 19:13:31.707967       1 range_allocator.go:174] "Sending events to api server"
	I0809 19:13:31.707920       1 shared_informer.go:318] Caches are synced for attach detach
	I0809 19:13:31.707994       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0809 19:13:31.708000       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0809 19:13:31.708007       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0809 19:13:31.709437       1 shared_informer.go:318] Caches are synced for PVC protection
	I0809 19:13:31.714511       1 shared_informer.go:318] Caches are synced for endpoint
	I0809 19:13:31.718409       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0809 19:13:31.745918       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0809 19:13:31.746106       1 shared_informer.go:318] Caches are synced for GC
	I0809 19:13:31.758283       1 shared_informer.go:318] Caches are synced for job
	I0809 19:13:31.810868       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0809 19:13:31.845420       1 shared_informer.go:318] Caches are synced for crt configmap
	I0809 19:13:31.846651       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0809 19:13:31.877519       1 shared_informer.go:318] Caches are synced for disruption
	I0809 19:13:31.889075       1 shared_informer.go:318] Caches are synced for deployment
	I0809 19:13:31.896167       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0809 19:13:31.936193       1 shared_informer.go:318] Caches are synced for resource quota
	I0809 19:13:31.951726       1 shared_informer.go:318] Caches are synced for resource quota
	I0809 19:13:32.281412       1 shared_informer.go:318] Caches are synced for garbage collector
	I0809 19:13:32.294951       1 shared_informer.go:318] Caches are synced for garbage collector
	I0809 19:13:32.294984       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-controller-manager [bc3074123d41a976429c2777747bda837d51d3dcc586eac5bb23fb9be27dffe2] <==
	* I0809 19:12:48.063047       1 serving.go:348] Generated self-signed cert in-memory
	I0809 19:12:48.272946       1 controllermanager.go:187] "Starting" version="v1.27.4"
	I0809 19:12:48.272971       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0809 19:12:48.273930       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0809 19:12:48.273954       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0809 19:12:48.274597       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0809 19:12:48.274664       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	* 
	* ==> kube-proxy [3b23f5302c00ae4a6dc8fdd99a47353d345d3eaf4a110ea7793654f4a876cff8] <==
	* I0809 19:12:57.961633       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I0809 19:12:57.961917       1 server_others.go:110] "Detected node IP" address="192.168.85.2"
	I0809 19:12:57.961994       1 server_others.go:554] "Using iptables proxy"
	I0809 19:12:57.987270       1 server_others.go:192] "Using iptables Proxier"
	I0809 19:12:57.987316       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0809 19:12:57.987328       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0809 19:12:57.987347       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0809 19:12:57.987385       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0809 19:12:57.988336       1 server.go:658] "Version info" version="v1.27.4"
	I0809 19:12:57.988462       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0809 19:12:57.989609       1 config.go:188] "Starting service config controller"
	I0809 19:12:57.989633       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0809 19:12:57.989659       1 config.go:97] "Starting endpoint slice config controller"
	I0809 19:12:57.989662       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0809 19:12:57.990058       1 config.go:315] "Starting node config controller"
	I0809 19:12:57.990070       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0809 19:12:58.089991       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0809 19:12:58.090013       1 shared_informer.go:318] Caches are synced for service config
	I0809 19:12:58.090126       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-proxy [f2a29b737960d8ad19e084cd5f3db8078bd8b5a538bb96b63d3a142e7fcb8129] <==
	* I0809 19:13:19.407431       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I0809 19:13:19.407515       1 server_others.go:110] "Detected node IP" address="192.168.85.2"
	I0809 19:13:19.407539       1 server_others.go:554] "Using iptables proxy"
	I0809 19:13:19.463291       1 server_others.go:192] "Using iptables Proxier"
	I0809 19:13:19.463319       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0809 19:13:19.463326       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0809 19:13:19.463339       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0809 19:13:19.463369       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0809 19:13:19.463999       1 server.go:658] "Version info" version="v1.27.4"
	I0809 19:13:19.464020       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0809 19:13:19.465559       1 config.go:97] "Starting endpoint slice config controller"
	I0809 19:13:19.465594       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0809 19:13:19.465629       1 config.go:188] "Starting service config controller"
	I0809 19:13:19.465638       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0809 19:13:19.465718       1 config.go:315] "Starting node config controller"
	I0809 19:13:19.465738       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0809 19:13:19.565728       1 shared_informer.go:318] Caches are synced for service config
	I0809 19:13:19.565741       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0809 19:13:19.565790       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [049cef752e63c67d62b488bcea64bea577b7ad37b6a3f8003e40e6f410707a21] <==
	* E0809 19:13:18.680241       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0809 19:13:18.680302       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0809 19:13:18.680600       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0809 19:13:18.680624       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0809 19:13:18.680736       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0809 19:13:18.680741       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0809 19:13:18.680750       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0809 19:13:18.680758       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0809 19:13:18.680834       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0809 19:13:18.681053       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0809 19:13:18.680909       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0809 19:13:18.681072       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0809 19:13:18.680978       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0809 19:13:18.681097       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0809 19:13:18.681130       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0809 19:13:18.681144       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0809 19:13:18.681268       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0809 19:13:18.681287       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0809 19:13:18.681303       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0809 19:13:18.681315       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0809 19:13:18.681319       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0809 19:13:18.681330       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0809 19:13:18.754504       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0809 19:13:18.754545       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0809 19:13:20.972108       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [d5c75acd2320400227d1888ad9ca5e5ec56a1c2b9c137db426552d9917fd5fc4] <==
	* 
	* 
	* ==> kubelet <==
	* Aug 09 19:13:16 pause-734678 kubelet[3932]: I0809 19:13:16.464830    3932 kubelet_node_status.go:70] "Attempting to register node" node="pause-734678"
	Aug 09 19:13:18 pause-734678 kubelet[3932]: I0809 19:13:18.761263    3932 kubelet_node_status.go:108] "Node was previously registered" node="pause-734678"
	Aug 09 19:13:18 pause-734678 kubelet[3932]: I0809 19:13:18.761360    3932 kubelet_node_status.go:73] "Successfully registered node" node="pause-734678"
	Aug 09 19:13:18 pause-734678 kubelet[3932]: I0809 19:13:18.762647    3932 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 09 19:13:18 pause-734678 kubelet[3932]: I0809 19:13:18.763385    3932 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 09 19:13:18 pause-734678 kubelet[3932]: I0809 19:13:18.921160    3932 apiserver.go:52] "Watching apiserver"
	Aug 09 19:13:18 pause-734678 kubelet[3932]: I0809 19:13:18.923964    3932 topology_manager.go:212] "Topology Admit Handler"
	Aug 09 19:13:18 pause-734678 kubelet[3932]: I0809 19:13:18.924096    3932 topology_manager.go:212] "Topology Admit Handler"
	Aug 09 19:13:18 pause-734678 kubelet[3932]: I0809 19:13:18.924152    3932 topology_manager.go:212] "Topology Admit Handler"
	Aug 09 19:13:18 pause-734678 kubelet[3932]: I0809 19:13:18.927274    3932 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
	Aug 09 19:13:18 pause-734678 kubelet[3932]: I0809 19:13:18.976258    3932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/23d08f14-790a-46be-87b1-032c144a76cb-lib-modules\") pod \"kube-proxy-q25ss\" (UID: \"23d08f14-790a-46be-87b1-032c144a76cb\") " pod="kube-system/kube-proxy-q25ss"
	Aug 09 19:13:18 pause-734678 kubelet[3932]: I0809 19:13:18.976380    3932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdpzl\" (UniqueName: \"kubernetes.io/projected/7c939e8b-f847-44a3-984e-6276b66d3afc-kube-api-access-bdpzl\") pod \"coredns-5d78c9869d-zwnjn\" (UID: \"7c939e8b-f847-44a3-984e-6276b66d3afc\") " pod="kube-system/coredns-5d78c9869d-zwnjn"
	Aug 09 19:13:18 pause-734678 kubelet[3932]: I0809 19:13:18.976462    3932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/23d08f14-790a-46be-87b1-032c144a76cb-xtables-lock\") pod \"kube-proxy-q25ss\" (UID: \"23d08f14-790a-46be-87b1-032c144a76cb\") " pod="kube-system/kube-proxy-q25ss"
	Aug 09 19:13:18 pause-734678 kubelet[3932]: I0809 19:13:18.976492    3932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9265ba13-07f0-4c44-a920-74175ec0e07a-lib-modules\") pod \"kindnet-xxgzn\" (UID: \"9265ba13-07f0-4c44-a920-74175ec0e07a\") " pod="kube-system/kindnet-xxgzn"
	Aug 09 19:13:18 pause-734678 kubelet[3932]: I0809 19:13:18.976590    3932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/23d08f14-790a-46be-87b1-032c144a76cb-kube-proxy\") pod \"kube-proxy-q25ss\" (UID: \"23d08f14-790a-46be-87b1-032c144a76cb\") " pod="kube-system/kube-proxy-q25ss"
	Aug 09 19:13:18 pause-734678 kubelet[3932]: I0809 19:13:18.976638    3932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9265ba13-07f0-4c44-a920-74175ec0e07a-xtables-lock\") pod \"kindnet-xxgzn\" (UID: \"9265ba13-07f0-4c44-a920-74175ec0e07a\") " pod="kube-system/kindnet-xxgzn"
	Aug 09 19:13:18 pause-734678 kubelet[3932]: I0809 19:13:18.977210    3932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s79th\" (UniqueName: \"kubernetes.io/projected/9265ba13-07f0-4c44-a920-74175ec0e07a-kube-api-access-s79th\") pod \"kindnet-xxgzn\" (UID: \"9265ba13-07f0-4c44-a920-74175ec0e07a\") " pod="kube-system/kindnet-xxgzn"
	Aug 09 19:13:18 pause-734678 kubelet[3932]: I0809 19:13:18.977311    3932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-854kj\" (UniqueName: \"kubernetes.io/projected/23d08f14-790a-46be-87b1-032c144a76cb-kube-api-access-854kj\") pod \"kube-proxy-q25ss\" (UID: \"23d08f14-790a-46be-87b1-032c144a76cb\") " pod="kube-system/kube-proxy-q25ss"
	Aug 09 19:13:18 pause-734678 kubelet[3932]: I0809 19:13:18.977360    3932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9265ba13-07f0-4c44-a920-74175ec0e07a-cni-cfg\") pod \"kindnet-xxgzn\" (UID: \"9265ba13-07f0-4c44-a920-74175ec0e07a\") " pod="kube-system/kindnet-xxgzn"
	Aug 09 19:13:18 pause-734678 kubelet[3932]: I0809 19:13:18.977400    3932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c939e8b-f847-44a3-984e-6276b66d3afc-config-volume\") pod \"coredns-5d78c9869d-zwnjn\" (UID: \"7c939e8b-f847-44a3-984e-6276b66d3afc\") " pod="kube-system/coredns-5d78c9869d-zwnjn"
	Aug 09 19:13:18 pause-734678 kubelet[3932]: I0809 19:13:18.977426    3932 reconciler.go:41] "Reconciler: start to sync state"
	Aug 09 19:13:19 pause-734678 kubelet[3932]: I0809 19:13:19.225235    3932 scope.go:115] "RemoveContainer" containerID="d222edca27c82b853c2e77a2ee623d656f0e5640f2d003d2b5a25cc08495ae21"
	Aug 09 19:13:19 pause-734678 kubelet[3932]: I0809 19:13:19.225335    3932 scope.go:115] "RemoveContainer" containerID="3b23f5302c00ae4a6dc8fdd99a47353d345d3eaf4a110ea7793654f4a876cff8"
	Aug 09 19:13:19 pause-734678 kubelet[3932]: I0809 19:13:19.225386    3932 scope.go:115] "RemoveContainer" containerID="4cdeeccf1e020b0f8436e2bf45bc3520fc0e59a7253a1255b7b6a12cae630441"
	Aug 09 19:13:21 pause-734678 kubelet[3932]: I0809 19:13:21.822192    3932 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-734678 -n pause-734678
helpers_test.go:261: (dbg) Run:  kubectl --context pause-734678 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (72.77s)

                                                
                                    

Test pass (271/304)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 4.89
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.06
10 TestDownloadOnly/v1.27.4/json-events 6.82
11 TestDownloadOnly/v1.27.4/preload-exists 0
15 TestDownloadOnly/v1.27.4/LogsDuration 0.06
17 TestDownloadOnly/v1.28.0-rc.0/json-events 5.69
18 TestDownloadOnly/v1.28.0-rc.0/preload-exists 0
22 TestDownloadOnly/v1.28.0-rc.0/LogsDuration 0.06
23 TestDownloadOnly/DeleteAll 0.19
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.12
25 TestDownloadOnlyKic 1.19
26 TestBinaryMirror 0.71
27 TestOffline 66.29
29 TestAddons/Setup 123.43
31 TestAddons/parallel/Registry 14.36
33 TestAddons/parallel/InspektorGadget 11.09
34 TestAddons/parallel/MetricsServer 5.73
35 TestAddons/parallel/HelmTiller 11.22
37 TestAddons/parallel/CSI 109.59
38 TestAddons/parallel/Headlamp 11.05
39 TestAddons/parallel/CloudSpanner 5.65
42 TestAddons/serial/GCPAuth/Namespaces 0.12
43 TestAddons/StoppedEnableDisable 12.13
44 TestCertOptions 28.84
45 TestCertExpiration 235.36
47 TestForceSystemdFlag 29.95
48 TestForceSystemdEnv 43.47
50 TestKVMDriverInstallOrUpdate 1.51
54 TestErrorSpam/setup 24.31
55 TestErrorSpam/start 0.57
56 TestErrorSpam/status 0.85
57 TestErrorSpam/pause 1.49
58 TestErrorSpam/unpause 1.5
59 TestErrorSpam/stop 1.37
62 TestFunctional/serial/CopySyncFile 0
63 TestFunctional/serial/StartWithProxy 42.21
64 TestFunctional/serial/AuditLog 0
65 TestFunctional/serial/SoftStart 29.66
66 TestFunctional/serial/KubeContext 0.04
67 TestFunctional/serial/KubectlGetPods 0.06
70 TestFunctional/serial/CacheCmd/cache/add_remote 2.77
71 TestFunctional/serial/CacheCmd/cache/add_local 0.91
72 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
73 TestFunctional/serial/CacheCmd/cache/list 0.04
74 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
75 TestFunctional/serial/CacheCmd/cache/cache_reload 1.63
76 TestFunctional/serial/CacheCmd/cache/delete 0.09
77 TestFunctional/serial/MinikubeKubectlCmd 0.1
78 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
79 TestFunctional/serial/ExtraConfig 32.13
80 TestFunctional/serial/ComponentHealth 0.07
81 TestFunctional/serial/LogsCmd 1.34
82 TestFunctional/serial/LogsFileCmd 1.37
83 TestFunctional/serial/InvalidService 4.01
85 TestFunctional/parallel/ConfigCmd 0.33
86 TestFunctional/parallel/DashboardCmd 14.12
87 TestFunctional/parallel/DryRun 0.35
88 TestFunctional/parallel/InternationalLanguage 0.16
89 TestFunctional/parallel/StatusCmd 0.89
93 TestFunctional/parallel/ServiceCmdConnect 7.72
94 TestFunctional/parallel/AddonsCmd 0.13
95 TestFunctional/parallel/PersistentVolumeClaim 25.99
97 TestFunctional/parallel/SSHCmd 0.52
98 TestFunctional/parallel/CpCmd 1.26
99 TestFunctional/parallel/MySQL 24.72
100 TestFunctional/parallel/FileSync 0.34
101 TestFunctional/parallel/CertSync 1.87
105 TestFunctional/parallel/NodeLabels 0.07
107 TestFunctional/parallel/NonActiveRuntimeDisabled 0.62
109 TestFunctional/parallel/License 0.15
110 TestFunctional/parallel/Version/short 0.05
111 TestFunctional/parallel/Version/components 0.51
112 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
113 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
114 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
115 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
116 TestFunctional/parallel/ImageCommands/ImageBuild 2.39
117 TestFunctional/parallel/ImageCommands/Setup 1.06
118 TestFunctional/parallel/UpdateContextCmd/no_changes 0.13
119 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
120 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
121 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.76
122 TestFunctional/parallel/ServiceCmd/DeployApp 24.2
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.49
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 23.3
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.11
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 10.13
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.76
131 TestFunctional/parallel/ImageCommands/ImageRemove 1.35
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.37
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.9
134 TestFunctional/parallel/ServiceCmd/List 0.51
135 TestFunctional/parallel/ServiceCmd/JSONOutput 0.9
136 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
137 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
141 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
142 TestFunctional/parallel/ServiceCmd/HTTPS 0.53
143 TestFunctional/parallel/ServiceCmd/Format 0.53
144 TestFunctional/parallel/ProfileCmd/profile_not_create 0.37
145 TestFunctional/parallel/ServiceCmd/URL 0.57
146 TestFunctional/parallel/ProfileCmd/profile_list 0.35
147 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
148 TestFunctional/parallel/MountCmd/any-port 9.67
149 TestFunctional/parallel/MountCmd/specific-port 1.98
150 TestFunctional/parallel/MountCmd/VerifyCleanup 1.86
151 TestFunctional/delete_addon-resizer_images 0.1
152 TestFunctional/delete_my-image_image 0.02
153 TestFunctional/delete_minikube_cached_images 0.02
157 TestIngressAddonLegacy/StartLegacyK8sCluster 80.13
159 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 10.8
160 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.54
164 TestJSONOutput/start/Command 67.34
165 TestJSONOutput/start/Audit 0
167 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
168 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
170 TestJSONOutput/pause/Command 0.65
171 TestJSONOutput/pause/Audit 0
173 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
174 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
176 TestJSONOutput/unpause/Command 0.6
177 TestJSONOutput/unpause/Audit 0
179 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
180 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
182 TestJSONOutput/stop/Command 5.72
183 TestJSONOutput/stop/Audit 0
185 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
187 TestErrorJSONOutput 0.2
189 TestKicCustomNetwork/create_custom_network 32.41
190 TestKicCustomNetwork/use_default_bridge_network 24.99
191 TestKicExistingNetwork 24.17
192 TestKicCustomSubnet 24.15
193 TestKicStaticIP 23.95
194 TestMainNoArgs 0.05
195 TestMinikubeProfile 51.94
198 TestMountStart/serial/StartWithMountFirst 5.48
199 TestMountStart/serial/VerifyMountFirst 0.25
200 TestMountStart/serial/StartWithMountSecond 7.96
201 TestMountStart/serial/VerifyMountSecond 0.24
202 TestMountStart/serial/DeleteFirst 1.62
203 TestMountStart/serial/VerifyMountPostDelete 0.25
204 TestMountStart/serial/Stop 1.19
205 TestMountStart/serial/RestartStopped 6.94
206 TestMountStart/serial/VerifyMountPostStop 0.24
209 TestMultiNode/serial/FreshStart2Nodes 88.3
210 TestMultiNode/serial/DeployApp2Nodes 4.06
212 TestMultiNode/serial/AddNode 49.01
213 TestMultiNode/serial/ProfileList 0.27
214 TestMultiNode/serial/CopyFile 9.05
215 TestMultiNode/serial/StopNode 2.1
216 TestMultiNode/serial/StartAfterStop 10.63
217 TestMultiNode/serial/RestartKeepsNodes 115.02
218 TestMultiNode/serial/DeleteNode 4.67
219 TestMultiNode/serial/StopMultiNode 23.95
220 TestMultiNode/serial/RestartMultiNode 72.37
221 TestMultiNode/serial/ValidateNameConflict 25.94
226 TestPreload 125.33
228 TestScheduledStopUnix 97.65
231 TestInsufficientStorage 12.85
234 TestKubernetesUpgrade 357.07
235 TestMissingContainerUpgrade 149.72
237 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
238 TestNoKubernetes/serial/StartWithK8s 39.31
239 TestNoKubernetes/serial/StartWithStopK8s 14.34
240 TestNoKubernetes/serial/Start 8.51
241 TestNoKubernetes/serial/VerifyK8sNotRunning 0.3
242 TestNoKubernetes/serial/ProfileList 4.46
243 TestNoKubernetes/serial/Stop 1.26
244 TestNoKubernetes/serial/StartNoArgs 7.37
245 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
253 TestNetworkPlugins/group/false 3.83
257 TestStoppedBinaryUpgrade/Setup 0.4
260 TestPause/serial/Start 76.16
261 TestStoppedBinaryUpgrade/MinikubeLogs 0.52
269 TestNetworkPlugins/group/auto/Start 69.08
271 TestNetworkPlugins/group/auto/KubeletFlags 0.26
272 TestNetworkPlugins/group/auto/NetCatPod 9.37
273 TestNetworkPlugins/group/auto/DNS 0.16
274 TestNetworkPlugins/group/auto/Localhost 0.15
275 TestNetworkPlugins/group/auto/HairPin 0.15
276 TestNetworkPlugins/group/kindnet/Start 74.58
277 TestNetworkPlugins/group/calico/Start 63.29
278 TestNetworkPlugins/group/custom-flannel/Start 56.01
279 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.26
280 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.29
281 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
282 TestNetworkPlugins/group/calico/ControllerPod 5.02
283 TestNetworkPlugins/group/kindnet/KubeletFlags 0.26
284 TestNetworkPlugins/group/kindnet/NetCatPod 9.33
285 TestNetworkPlugins/group/custom-flannel/DNS 0.25
286 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
287 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
288 TestNetworkPlugins/group/calico/KubeletFlags 0.29
289 TestNetworkPlugins/group/calico/NetCatPod 10.34
290 TestNetworkPlugins/group/kindnet/DNS 0.19
291 TestNetworkPlugins/group/kindnet/Localhost 0.18
292 TestNetworkPlugins/group/kindnet/HairPin 0.17
293 TestNetworkPlugins/group/calico/DNS 0.15
294 TestNetworkPlugins/group/calico/Localhost 0.14
295 TestNetworkPlugins/group/calico/HairPin 0.15
296 TestNetworkPlugins/group/enable-default-cni/Start 82.17
297 TestNetworkPlugins/group/flannel/Start 58.52
298 TestNetworkPlugins/group/bridge/Start 40.53
300 TestStartStop/group/old-k8s-version/serial/FirstStart 134
301 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
302 TestNetworkPlugins/group/bridge/NetCatPod 11.39
303 TestNetworkPlugins/group/bridge/DNS 0.18
304 TestNetworkPlugins/group/flannel/ControllerPod 5.02
305 TestNetworkPlugins/group/bridge/Localhost 0.15
306 TestNetworkPlugins/group/bridge/HairPin 0.15
307 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
308 TestNetworkPlugins/group/flannel/NetCatPod 11.32
309 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
310 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.35
311 TestNetworkPlugins/group/flannel/DNS 0.2
312 TestNetworkPlugins/group/flannel/Localhost 0.2
313 TestNetworkPlugins/group/flannel/HairPin 0.18
315 TestStartStop/group/no-preload/serial/FirstStart 61.53
316 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
317 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
318 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
320 TestStartStop/group/embed-certs/serial/FirstStart 73.33
322 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 71.67
323 TestStartStop/group/no-preload/serial/DeployApp 8.41
324 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.02
325 TestStartStop/group/no-preload/serial/Stop 11.92
326 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.16
327 TestStartStop/group/no-preload/serial/SecondStart 340.82
328 TestStartStop/group/embed-certs/serial/DeployApp 8.4
329 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.1
330 TestStartStop/group/embed-certs/serial/Stop 12.01
331 TestStartStop/group/old-k8s-version/serial/DeployApp 9.39
332 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.53
333 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.73
334 TestStartStop/group/old-k8s-version/serial/Stop 12.01
335 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1
336 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.05
337 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
338 TestStartStop/group/embed-certs/serial/SecondStart 343.01
339 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
340 TestStartStop/group/old-k8s-version/serial/SecondStart 430.04
341 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.17
342 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 342.11
343 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 9.02
344 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
345 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.3
346 TestStartStop/group/no-preload/serial/Pause 2.89
348 TestStartStop/group/newest-cni/serial/FirstStart 40.29
349 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 8.02
350 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
351 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 12.06
352 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.37
353 TestStartStop/group/embed-certs/serial/Pause 3.52
354 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
355 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.31
356 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.91
357 TestStartStop/group/newest-cni/serial/DeployApp 0
358 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.9
359 TestStartStop/group/newest-cni/serial/Stop 2.07
360 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
361 TestStartStop/group/newest-cni/serial/SecondStart 26.25
362 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
363 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
364 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
365 TestStartStop/group/newest-cni/serial/Pause 2.49
366 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
367 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
368 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.29
369 TestStartStop/group/old-k8s-version/serial/Pause 2.62
x
+
TestDownloadOnly/v1.16.0/json-events (4.89s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-649799 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-649799 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.885181296s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (4.89s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-649799
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-649799: exit status 85 (60.936031ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-649799 | jenkins | v1.31.1 | 09 Aug 23 18:39 UTC |          |
	|         | -p download-only-649799        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/09 18:39:04
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.20.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0809 18:39:04.907906  823446 out.go:296] Setting OutFile to fd 1 ...
	I0809 18:39:04.908057  823446 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 18:39:04.908067  823446 out.go:309] Setting ErrFile to fd 2...
	I0809 18:39:04.908071  823446 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 18:39:04.908322  823446 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17011-816603/.minikube/bin
	W0809 18:39:04.908464  823446 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17011-816603/.minikube/config/config.json: open /home/jenkins/minikube-integration/17011-816603/.minikube/config/config.json: no such file or directory
	I0809 18:39:04.909108  823446 out.go:303] Setting JSON to true
	I0809 18:39:04.910628  823446 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":8500,"bootTime":1691597845,"procs":787,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1038-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0809 18:39:04.910693  823446 start.go:138] virtualization: kvm guest
	I0809 18:39:04.913253  823446 out.go:97] [download-only-649799] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)
	I0809 18:39:04.914894  823446 out.go:169] MINIKUBE_LOCATION=17011
	W0809 18:39:04.913404  823446 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17011-816603/.minikube/cache/preloaded-tarball: no such file or directory
	I0809 18:39:04.913476  823446 notify.go:220] Checking for updates...
	I0809 18:39:04.917844  823446 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 18:39:04.919232  823446 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17011-816603/kubeconfig
	I0809 18:39:04.920595  823446 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17011-816603/.minikube
	I0809 18:39:04.923428  823446 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0809 18:39:04.926185  823446 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0809 18:39:04.926488  823446 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 18:39:04.949020  823446 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0809 18:39:04.949141  823446 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0809 18:39:05.001073  823446 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:43 SystemTime:2023-08-09 18:39:04.991894228 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1038-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0809 18:39:05.001221  823446 docker.go:294] overlay module found
	I0809 18:39:05.003301  823446 out.go:97] Using the docker driver based on user configuration
	I0809 18:39:05.003338  823446 start.go:298] selected driver: docker
	I0809 18:39:05.003346  823446 start.go:901] validating driver "docker" against <nil>
	I0809 18:39:05.003453  823446 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0809 18:39:05.059713  823446 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:43 SystemTime:2023-08-09 18:39:05.051443565 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1038-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0809 18:39:05.059879  823446 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0809 18:39:05.060350  823446 start_flags.go:382] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0809 18:39:05.060487  823446 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0809 18:39:05.062404  823446 out.go:169] Using Docker driver with root privileges
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-649799"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.06s)

TestDownloadOnly/v1.27.4/json-events (6.82s)

=== RUN   TestDownloadOnly/v1.27.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-649799 --force --alsologtostderr --kubernetes-version=v1.27.4 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-649799 --force --alsologtostderr --kubernetes-version=v1.27.4 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.824078417s)
--- PASS: TestDownloadOnly/v1.27.4/json-events (6.82s)

TestDownloadOnly/v1.27.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.27.4/preload-exists
--- PASS: TestDownloadOnly/v1.27.4/preload-exists (0.00s)

TestDownloadOnly/v1.27.4/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.27.4/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-649799
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-649799: exit status 85 (61.619714ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-649799 | jenkins | v1.31.1 | 09 Aug 23 18:39 UTC |          |
	|         | -p download-only-649799        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-649799 | jenkins | v1.31.1 | 09 Aug 23 18:39 UTC |          |
	|         | -p download-only-649799        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.4   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/09 18:39:09
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.20.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0809 18:39:09.857057  823589 out.go:296] Setting OutFile to fd 1 ...
	I0809 18:39:09.857197  823589 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 18:39:09.857206  823589 out.go:309] Setting ErrFile to fd 2...
	I0809 18:39:09.857210  823589 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 18:39:09.857400  823589 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17011-816603/.minikube/bin
	W0809 18:39:09.857513  823589 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17011-816603/.minikube/config/config.json: open /home/jenkins/minikube-integration/17011-816603/.minikube/config/config.json: no such file or directory
	I0809 18:39:09.857938  823589 out.go:303] Setting JSON to true
	I0809 18:39:09.859362  823589 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":8505,"bootTime":1691597845,"procs":783,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1038-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0809 18:39:09.859426  823589 start.go:138] virtualization: kvm guest
	I0809 18:39:09.861481  823589 out.go:97] [download-only-649799] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)
	I0809 18:39:09.863063  823589 out.go:169] MINIKUBE_LOCATION=17011
	I0809 18:39:09.861688  823589 notify.go:220] Checking for updates...
	I0809 18:39:09.866210  823589 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 18:39:09.867654  823589 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17011-816603/kubeconfig
	I0809 18:39:09.869446  823589 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17011-816603/.minikube
	I0809 18:39:09.870973  823589 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0809 18:39:09.873652  823589 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0809 18:39:09.874052  823589 config.go:182] Loaded profile config "download-only-649799": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W0809 18:39:09.874113  823589 start.go:809] api.Load failed for download-only-649799: filestore "download-only-649799": Docker machine "download-only-649799" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0809 18:39:09.874208  823589 driver.go:373] Setting default libvirt URI to qemu:///system
	W0809 18:39:09.874238  823589 start.go:809] api.Load failed for download-only-649799: filestore "download-only-649799": Docker machine "download-only-649799" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0809 18:39:09.895341  823589 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0809 18:39:09.895432  823589 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0809 18:39:09.947967  823589 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-08-09 18:39:09.939196778 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1038-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0809 18:39:09.948073  823589 docker.go:294] overlay module found
	I0809 18:39:09.949741  823589 out.go:97] Using the docker driver based on existing profile
	I0809 18:39:09.949761  823589 start.go:298] selected driver: docker
	I0809 18:39:09.949767  823589 start.go:901] validating driver "docker" against &{Name:download-only-649799 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-649799 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 18:39:09.949925  823589 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0809 18:39:10.001112  823589 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-08-09 18:39:09.992841013 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1038-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0809 18:39:10.001735  823589 cni.go:84] Creating CNI manager for ""
	I0809 18:39:10.001752  823589 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0809 18:39:10.001763  823589 start_flags.go:319] config:
	{Name:download-only-649799 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:download-only-649799 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 18:39:10.003552  823589 out.go:97] Starting control plane node download-only-649799 in cluster download-only-649799
	I0809 18:39:10.003567  823589 cache.go:122] Beginning downloading kic base image for docker with crio
	I0809 18:39:10.004866  823589 out.go:97] Pulling base image ...
	I0809 18:39:10.004898  823589 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0809 18:39:10.005011  823589 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 in local docker daemon
	I0809 18:39:10.020330  823589 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 to local cache
	I0809 18:39:10.020463  823589 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 in local cache directory
	I0809 18:39:10.020479  823589 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 in local cache directory, skipping pull
	I0809 18:39:10.020483  823589 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 exists in cache, skipping pull
	I0809 18:39:10.020499  823589 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 as a tarball
	I0809 18:39:10.031593  823589 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.4/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4
	I0809 18:39:10.031622  823589 cache.go:57] Caching tarball of preloaded images
	I0809 18:39:10.031785  823589 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0809 18:39:10.033593  823589 out.go:97] Downloading Kubernetes v1.27.4 preload ...
	I0809 18:39:10.033606  823589 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4 ...
	I0809 18:39:10.059488  823589 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.4/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:8fb3cf29e31ee2994fdad70ff1ffc061 -> /home/jenkins/minikube-integration/17011-816603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-649799"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.4/LogsDuration (0.06s)
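Note: the download URL above carries its own checksum as a query parameter, and the preload step verifies it after saving. A minimal sketch for re-checking the cached tarball by hand, reusing the exact path and md5 from the log:

    # Verify the cached v1.27.4 preload against the md5 embedded in its URL.
    PRELOAD=/home/jenkins/minikube-integration/17011-816603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4
    md5sum "$PRELOAD"   # expected: 8fb3cf29e31ee2994fdad70ff1ffc061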

TestDownloadOnly/v1.28.0-rc.0/json-events (5.69s)

=== RUN   TestDownloadOnly/v1.28.0-rc.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-649799 --force --alsologtostderr --kubernetes-version=v1.28.0-rc.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-649799 --force --alsologtostderr --kubernetes-version=v1.28.0-rc.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.691779885s)
--- PASS: TestDownloadOnly/v1.28.0-rc.0/json-events (5.69s)

TestDownloadOnly/v1.28.0-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.28.0-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0-rc.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.28.0-rc.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-649799
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-649799: exit status 85 (59.89844ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-649799 | jenkins | v1.31.1 | 09 Aug 23 18:39 UTC |          |
	|         | -p download-only-649799           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-649799 | jenkins | v1.31.1 | 09 Aug 23 18:39 UTC |          |
	|         | -p download-only-649799           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.4      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-649799 | jenkins | v1.31.1 | 09 Aug 23 18:39 UTC |          |
	|         | -p download-only-649799           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.0-rc.0 |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/09 18:39:16
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.20.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0809 18:39:16.744349  823733 out.go:296] Setting OutFile to fd 1 ...
	I0809 18:39:16.744560  823733 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 18:39:16.744577  823733 out.go:309] Setting ErrFile to fd 2...
	I0809 18:39:16.744582  823733 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 18:39:16.744844  823733 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17011-816603/.minikube/bin
	W0809 18:39:16.745011  823733 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17011-816603/.minikube/config/config.json: open /home/jenkins/minikube-integration/17011-816603/.minikube/config/config.json: no such file or directory
	I0809 18:39:16.745517  823733 out.go:303] Setting JSON to true
	I0809 18:39:16.747061  823733 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":8512,"bootTime":1691597845,"procs":783,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1038-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0809 18:39:16.747145  823733 start.go:138] virtualization: kvm guest
	I0809 18:39:16.749093  823733 out.go:97] [download-only-649799] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)
	I0809 18:39:16.750783  823733 out.go:169] MINIKUBE_LOCATION=17011
	I0809 18:39:16.749327  823733 notify.go:220] Checking for updates...
	I0809 18:39:16.753941  823733 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 18:39:16.756611  823733 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17011-816603/kubeconfig
	I0809 18:39:16.757846  823733 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17011-816603/.minikube
	I0809 18:39:16.759042  823733 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0809 18:39:16.761245  823733 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0809 18:39:16.761632  823733 config.go:182] Loaded profile config "download-only-649799": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	W0809 18:39:16.761673  823733 start.go:809] api.Load failed for download-only-649799: filestore "download-only-649799": Docker machine "download-only-649799" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0809 18:39:16.761770  823733 driver.go:373] Setting default libvirt URI to qemu:///system
	W0809 18:39:16.761798  823733 start.go:809] api.Load failed for download-only-649799: filestore "download-only-649799": Docker machine "download-only-649799" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0809 18:39:16.783675  823733 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0809 18:39:16.783761  823733 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0809 18:39:16.835942  823733 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-08-09 18:39:16.827779275 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1038-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0809 18:39:16.836053  823733 docker.go:294] overlay module found
	I0809 18:39:16.837339  823733 out.go:97] Using the docker driver based on existing profile
	I0809 18:39:16.837359  823733 start.go:298] selected driver: docker
	I0809 18:39:16.837364  823733 start.go:901] validating driver "docker" against &{Name:download-only-649799 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:download-only-649799 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 18:39:16.837507  823733 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0809 18:39:16.889036  823733 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-08-09 18:39:16.88084021 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1038-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archit
ecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0809 18:39:16.889667  823733 cni.go:84] Creating CNI manager for ""
	I0809 18:39:16.889686  823733 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0809 18:39:16.889698  823733 start_flags.go:319] config:
	{Name:download-only-649799 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0-rc.0 ClusterName:download-only-649799 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contai
nerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 18:39:16.891255  823733 out.go:97] Starting control plane node download-only-649799 in cluster download-only-649799
	I0809 18:39:16.891271  823733 cache.go:122] Beginning downloading kic base image for docker with crio
	I0809 18:39:16.892520  823733 out.go:97] Pulling base image ...
	I0809 18:39:16.892551  823733 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.0 and runtime crio
	I0809 18:39:16.892675  823733 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 in local docker daemon
	I0809 18:39:16.908222  823733 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 to local cache
	I0809 18:39:16.908389  823733 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 in local cache directory
	I0809 18:39:16.908411  823733 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 in local cache directory, skipping pull
	I0809 18:39:16.908418  823733 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 exists in cache, skipping pull
	I0809 18:39:16.908434  823733 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 as a tarball
	I0809 18:39:16.915826  823733 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0-rc.0/preloaded-images-k8s-v18-v1.28.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0809 18:39:16.915849  823733 cache.go:57] Caching tarball of preloaded images
	I0809 18:39:16.915959  823733 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.0 and runtime crio
	I0809 18:39:16.917471  823733 out.go:97] Downloading Kubernetes v1.28.0-rc.0 preload ...
	I0809 18:39:16.917486  823733 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I0809 18:39:16.941986  823733 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0-rc.0/preloaded-images-k8s-v18-v1.28.0-rc.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:26765fc139d2f2cc3a3903e63346a30b -> /home/jenkins/minikube-integration/17011-816603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0809 18:39:20.954799  823733 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I0809 18:39:20.954907  823733 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17011-816603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-649799"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0-rc.0/LogsDuration (0.06s)
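Note: exit status 85 in all three LogsDuration subtests is the asserted outcome, not a flake: with --download-only no node is ever created, so "minikube logs" finds no control plane to read, exactly as the stdout above says. A quick manual check (sketch, same profile as the run above):

    out/minikube-linux-amd64 logs -p download-only-649799
    echo $?   # 85 while the profile has no control plane node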

TestDownloadOnly/DeleteAll (0.19s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.19s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-649799
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.12s)
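Note: both cleanup paths exercised above are idempotent, which is what DeleteAlwaysSucceeds asserts. The manual equivalents (sketch):

    out/minikube-linux-amd64 delete --all                     # removes every profile
    out/minikube-linux-amd64 delete -p download-only-649799   # exits 0 even if the profile is already gone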

TestDownloadOnlyKic (1.19s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-720217 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-720217" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-720217
--- PASS: TestDownloadOnlyKic (1.19s)
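Note: TestDownloadOnlyKic only primes the local cache and then deletes the throwaway profile. A hedged sketch for inspecting what it left behind; the MINIKUBE_HOME root is taken from the logs above, but the kic/ subdirectory name is an assumption for illustration, not read from this report:

    # kic/ layout below is assumed, not shown by this log
    ls /home/jenkins/minikube-integration/17011-816603/.minikube/cache/kic/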

TestBinaryMirror (0.71s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-109939 --alsologtostderr --binary-mirror http://127.0.0.1:40831 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-109939" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-109939
--- PASS: TestBinaryMirror (0.71s)
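Note: the --binary-mirror flag points the kubectl/kubeadm/kubelet downloads at a local HTTP server the harness runs on 127.0.0.1:40831. A sketch of a throwaway mirror; the dl.k8s.io-style directory layout and the profile name are assumptions here, not shown by this log:

    mkdir -p mirror/v1.27.4/bin/linux/amd64 && cd mirror   # assumed layout
    python3 -m http.server 40831 &
    out/minikube-linux-amd64 start --download-only -p binary-mirror-demo \
      --binary-mirror http://127.0.0.1:40831 --driver=docker --container-runtime=crio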

TestOffline (66.29s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-957949 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-957949 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (1m3.610029423s)
helpers_test.go:175: Cleaning up "offline-crio-957949" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-957949
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-957949: (2.678649139s)
--- PASS: TestOffline (66.29s)
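Note: the 1m03s start time is measured with --wait=true, so "start" blocks until the components minikube verifies by default (apiserver and system pods, per the start_flags line earlier in this report) are healthy. The same invocation minus the harness, with a hypothetical profile name:

    out/minikube-linux-amd64 start -p offline-demo --memory=2048 --wait=true \
      --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 delete -p offline-demo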

TestAddons/Setup (123.43s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-922218 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-922218 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m3.433425859s)
--- PASS: TestAddons/Setup (123.43s)
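Note: Setup enables ten addons in a single start. A quick post-start sanity check, not part of this run but using a stock subcommand:

    out/minikube-linux-amd64 -p addons-922218 addons list   # enabled/disabled status per addon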

TestAddons/parallel/Registry (14.36s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 11.540597ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-wbxwm" [c9a46261-ec6a-4774-a0f1-91725dcd00f5] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.012219003s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-tfhw2" [321dcadb-ef6a-4c90-9825-67bd7009204e] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0225154s
addons_test.go:316: (dbg) Run:  kubectl --context addons-922218 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-922218 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-922218 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.524697199s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-amd64 -p addons-922218 ip
2023/08/09 18:41:41 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p addons-922218 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.36s)
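Note: the registry check has two halves, both reusable by hand: the in-cluster wget probe (verbatim from the log) and a host-side GET against the node IP on port 5000, which is what the DEBUG line above records:

    kubectl --context addons-922218 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    curl -s http://$(out/minikube-linux-amd64 -p addons-922218 ip):5000   # host side, as in the DEBUG GET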

TestAddons/parallel/InspektorGadget (11.09s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-kkjkb" [9a48d2b0-38a6-49ac-ab7d-df4f175eb7fc] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.011357049s
addons_test.go:817: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-922218
addons_test.go:817: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-922218: (6.081707198s)
--- PASS: TestAddons/parallel/InspektorGadget (11.09s)
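Note: the readiness gate here is a plain label selector in the gadget namespace; the equivalent manual check:

    kubectl --context addons-922218 get pods -n gadget -l k8s-app=gadget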

TestAddons/parallel/MetricsServer (5.73s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 3.886709ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7746886d4f-wthr7" [c9e9d783-2d9b-419b-b850-f0452b5d09b8] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.01333619s
addons_test.go:391: (dbg) Run:  kubectl --context addons-922218 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p addons-922218 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.73s)
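Note: "kubectl top" only returns data once the metrics-server pod is serving, which is why the test waits on the k8s-app=metrics-server selector first. By hand:

    kubectl --context addons-922218 get pods -n kube-system -l k8s-app=metrics-server
    kubectl --context addons-922218 top pods -n kube-system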

TestAddons/parallel/HelmTiller (11.22s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:432: tiller-deploy stabilized in 2.940404ms
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6847666dc-dbvcr" [2bef95be-3c35-4dbf-99c8-04b23626ce95] Running
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.012072189s
addons_test.go:449: (dbg) Run:  kubectl --context addons-922218 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:449: (dbg) Done: kubectl --context addons-922218 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.651409703s)
addons_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p addons-922218 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.22s)
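Note: the tiller check is a one-shot client pod running "helm version" against the in-cluster tiller; the invocation below is verbatim from the log and reusable as-is:

    kubectl --context addons-922218 run --rm helm-test --restart=Never \
      --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version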

TestAddons/parallel/CSI (109.59s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 12.685977ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-922218 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-922218 get pvc hpvc -o jsonpath={.status.phase} -n default
(identical poll command repeated 65 times in total while the test waited for pvc "hpvc")
addons_test.go:550: (dbg) Run:  kubectl --context addons-922218 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [ecf6126c-2415-416b-8e63-4fb62fa8f1fd] Pending
helpers_test.go:344: "task-pv-pod" [ecf6126c-2415-416b-8e63-4fb62fa8f1fd] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [ecf6126c-2415-416b-8e63-4fb62fa8f1fd] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.008882711s
addons_test.go:560: (dbg) Run:  kubectl --context addons-922218 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-922218 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-922218 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-922218 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-922218 delete pod task-pv-pod
addons_test.go:576: (dbg) Run:  kubectl --context addons-922218 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-922218 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-922218 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
(identical poll command repeated 18 times in total while the test waited for pvc "hpvc-restore")
addons_test.go:592: (dbg) Run:  kubectl --context addons-922218 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [514b3abd-13f9-4596-9a8f-a5a07e25ee8b] Pending
helpers_test.go:344: "task-pv-pod-restore" [514b3abd-13f9-4596-9a8f-a5a07e25ee8b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [514b3abd-13f9-4596-9a8f-a5a07e25ee8b] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.009242973s
addons_test.go:602: (dbg) Run:  kubectl --context addons-922218 delete pod task-pv-pod-restore
addons_test.go:602: (dbg) Done: kubectl --context addons-922218 delete pod task-pv-pod-restore: (1.272633161s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-922218 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-922218 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-amd64 -p addons-922218 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-amd64 -p addons-922218 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.637970144s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-amd64 -p addons-922218 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (109.59s)
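
The repeated helpers_test.go:394 lines above are one poll loop: the helper re-runs the same jsonpath query until the claim reports the phase it wants or the 6m0s budget runs out. Below is a minimal Go sketch of that pattern; it shells out to kubectl exactly as the log shows, but the function name, poll interval, and error handling are illustrative assumptions, not the actual helpers_test.go code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCPhase re-runs the same kubectl query the log shows until the
// claim reports wantPhase or the timeout elapses.
func waitForPVCPhase(kctx, ns, pvc, wantPhase string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kctx,
			"get", "pvc", pvc, "-n", ns,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == wantPhase {
			return nil
		}
		time.Sleep(2 * time.Second) // interval is a guess; the real helper's is not shown in the log
	}
	return fmt.Errorf("pvc %s/%s never reached phase %q within %v", ns, pvc, wantPhase, timeout)
}

func main() {
	// Same context, namespace, claim name, and budget as the test above.
	if err := waitForPVCPhase("addons-922218", "default", "hpvc", "Bound", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}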

                                                
                                    
x
+
TestAddons/parallel/Headlamp (11.05s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-922218 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-922218 --alsologtostderr -v=1: (1.036219687s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-66f6498c69-t8p2k" [53cd7579-eae3-4676-a5ab-bf7a7005dd0f] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-66f6498c69-t8p2k" [53cd7579-eae3-4676-a5ab-bf7a7005dd0f] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-66f6498c69-t8p2k" [53cd7579-eae3-4676-a5ab-bf7a7005dd0f] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.009729929s
--- PASS: TestAddons/parallel/Headlamp (11.05s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.65s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-69ddd4cd75-z7rx6" [642586c4-8565-418d-a46b-70cc9effe354] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.011532047s
addons_test.go:836: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-922218
--- PASS: TestAddons/parallel/CloudSpanner (5.65s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-922218 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-922218 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.13s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-922218
addons_test.go:148: (dbg) Done: out/minikube-linux-amd64 stop -p addons-922218: (11.901750795s)
addons_test.go:152: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-922218
addons_test.go:156: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-922218
addons_test.go:161: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-922218
--- PASS: TestAddons/StoppedEnableDisable (12.13s)

                                                
                                    
x
+
TestCertOptions (28.84s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-425713 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-425713 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (25.394015926s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-425713 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-425713 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-425713 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-425713" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-425713
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-425713: (2.826767797s)
--- PASS: TestCertOptions (28.84s)
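
TestCertOptions asserts that the extra --apiserver-ips/--apiserver-names flags actually land in the API server's serving certificate as subject alternative names, and that admin.conf points at the non-default port 8555. The test inspects the cert with openssl over ssh; the sketch below does the equivalent check with Go's crypto/x509 on a PEM copy of the cert. The local file name and the copy step are assumptions for illustration, not part of the test.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"net"
	"os"
)

func main() {
	// Assumes the cert was copied off the node first, e.g.:
	//   minikube -p cert-options-425713 ssh "sudo cat /var/lib/minikube/certs/apiserver.crt" > apiserver.crt
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found in apiserver.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// The values passed via --apiserver-ips / --apiserver-names above.
	wantIP := net.ParseIP("192.168.15.15")
	wantName := "www.google.com"

	ipOK := false
	for _, ip := range cert.IPAddresses {
		if ip.Equal(wantIP) {
			ipOK = true
		}
	}
	nameOK := false
	for _, n := range cert.DNSNames {
		if n == wantName {
			nameOK = true
		}
	}
	fmt.Printf("SAN IP %v present: %v, SAN DNS %q present: %v\n", wantIP, ipOK, wantName, nameOK)
}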

                                                
                                    
x
+
TestCertExpiration (235.36s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-023346 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-023346 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (27.961460655s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-023346 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-023346 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (25.469045666s)
helpers_test.go:175: Cleaning up "cert-expiration-023346" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-023346
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-023346: (1.927180858s)
--- PASS: TestCertExpiration (235.36s)
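
The two start invocations differ only in --cert-expiration: the first issues cluster certificates that expire after 3m, and the restart re-issues them for 8760h (one year). A small sketch of inspecting the resulting NotAfter field, assuming apiserver.crt has been copied out of the node as in the previous sketch:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in apiserver.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// With --cert-expiration=3m this window is minutes; after the 8760h restart it is a year.
	fmt.Printf("cert expires %s (in %s)\n",
		cert.NotAfter.Format(time.RFC3339), time.Until(cert.NotAfter).Round(time.Second))
}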

                                                
                                    
x
+
TestForceSystemdFlag (29.95s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-651963 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-651963 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (26.970239259s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-651963 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-651963" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-651963
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-651963: (2.630291778s)
--- PASS: TestForceSystemdFlag (29.95s)
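
The assertion behind the cat of /etc/crio/crio.conf.d/02-crio.conf is that --force-systemd leaves CRI-O configured with the systemd cgroup manager. A hedged sketch of that check follows; the key is the standard crio.conf cgroup_manager setting, but the exact assertion text in docker_test.go may differ.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same ssh invocation the test runs, minus its assertion helpers.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "force-systemd-flag-651963",
		"ssh", "cat /etc/crio/crio.conf.d/02-crio.conf").Output()
	if err != nil {
		panic(err)
	}
	if strings.Contains(string(out), `cgroup_manager = "systemd"`) {
		fmt.Println("CRI-O is using the systemd cgroup manager")
	} else {
		fmt.Println("systemd cgroup manager not found in 02-crio.conf")
	}
}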

                                                
                                    
x
+
TestForceSystemdEnv (43.47s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-004100 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-004100 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.234270303s)
helpers_test.go:175: Cleaning up "force-systemd-env-004100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-004100
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-004100: (5.232986022s)
--- PASS: TestForceSystemdEnv (43.47s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (1.51s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.51s)

                                                
                                    
x
+
TestErrorSpam/setup (24.31s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-140312 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-140312 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-140312 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-140312 --driver=docker  --container-runtime=crio: (24.308175204s)
--- PASS: TestErrorSpam/setup (24.31s)

                                                
                                    
x
+
TestErrorSpam/start (0.57s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-140312 --log_dir /tmp/nospam-140312 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-140312 --log_dir /tmp/nospam-140312 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-140312 --log_dir /tmp/nospam-140312 start --dry-run
--- PASS: TestErrorSpam/start (0.57s)

                                                
                                    
x
+
TestErrorSpam/status (0.85s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-140312 --log_dir /tmp/nospam-140312 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-140312 --log_dir /tmp/nospam-140312 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-140312 --log_dir /tmp/nospam-140312 status
--- PASS: TestErrorSpam/status (0.85s)

                                                
                                    
x
+
TestErrorSpam/pause (1.49s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-140312 --log_dir /tmp/nospam-140312 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-140312 --log_dir /tmp/nospam-140312 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-140312 --log_dir /tmp/nospam-140312 pause
--- PASS: TestErrorSpam/pause (1.49s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.5s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-140312 --log_dir /tmp/nospam-140312 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-140312 --log_dir /tmp/nospam-140312 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-140312 --log_dir /tmp/nospam-140312 unpause
--- PASS: TestErrorSpam/unpause (1.50s)

                                                
                                    
x
+
TestErrorSpam/stop (1.37s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-140312 --log_dir /tmp/nospam-140312 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-140312 --log_dir /tmp/nospam-140312 stop: (1.200158438s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-140312 --log_dir /tmp/nospam-140312 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-140312 --log_dir /tmp/nospam-140312 stop
--- PASS: TestErrorSpam/stop (1.37s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17011-816603/.minikube/files/etc/test/nested/copy/823434/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (42.21s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-421935 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-421935 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (42.206754907s)
--- PASS: TestFunctional/serial/StartWithProxy (42.21s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (29.66s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-421935 --alsologtostderr -v=8
E0809 18:46:28.240779  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/client.crt: no such file or directory
E0809 18:46:28.246755  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/client.crt: no such file or directory
E0809 18:46:28.257025  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/client.crt: no such file or directory
E0809 18:46:28.277317  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/client.crt: no such file or directory
E0809 18:46:28.318387  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/client.crt: no such file or directory
E0809 18:46:28.399029  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/client.crt: no such file or directory
E0809 18:46:28.559772  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/client.crt: no such file or directory
E0809 18:46:28.880289  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/client.crt: no such file or directory
E0809 18:46:29.521474  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/client.crt: no such file or directory
E0809 18:46:30.802039  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/client.crt: no such file or directory
E0809 18:46:33.362215  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-421935 --alsologtostderr -v=8: (29.654938244s)
functional_test.go:659: soft start took 29.655687764s for "functional-421935" cluster.
--- PASS: TestFunctional/serial/SoftStart (29.66s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-421935 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (2.77s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 cache add registry.k8s.io/pause:latest
E0809 18:46:38.483037  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/client.crt: no such file or directory
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.77s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (0.91s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-421935 /tmp/TestFunctionalserialCacheCmdcacheadd_local1987644348/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 cache add minikube-local-cache-test:functional-421935
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 cache delete minikube-local-cache-test:functional-421935
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-421935
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.91s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.63s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-421935 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (269.118689ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.63s)
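
The four commands above form a round trip: remove a cached image inside the node with crictl rmi, confirm with crictl inspecti that it is gone (the expected non-zero exit), run cache reload to push the local cache back into the node, and confirm the image is present again. A simplified Go sketch of the same sequence; the binary, profile, and image names come from the log, the rest is illustrative.

package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) error {
	return exec.Command(name, args...).Run()
}

func main() {
	const mk = "out/minikube-linux-amd64"
	const img = "registry.k8s.io/pause:latest"

	// Delete the image inside the node, then expect inspecti to fail.
	_ = run(mk, "-p", "functional-421935", "ssh", "sudo crictl rmi "+img)
	if err := run(mk, "-p", "functional-421935", "ssh", "sudo crictl inspecti "+img); err == nil {
		fmt.Println("unexpected: image still present after rmi")
		return
	}
	// cache reload pushes everything in the local cache back into the node.
	if err := run(mk, "-p", "functional-421935", "cache", "reload"); err != nil {
		panic(err)
	}
	if err := run(mk, "-p", "functional-421935", "ssh", "sudo crictl inspecti "+img); err != nil {
		fmt.Println("image still missing after cache reload")
		return
	}
	fmt.Println("cache reload restored", img)
}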

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 kubectl -- --context functional-421935 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-421935 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (32.13s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-421935 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0809 18:46:48.723824  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/client.crt: no such file or directory
E0809 18:47:09.204925  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-421935 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.127456522s)
functional_test.go:757: restart took 32.127599497s for "functional-421935" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (32.13s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-421935 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
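
ComponentHealth lists the control-plane pods as JSON and logs two facts per component: the pod phase and its Ready condition. A standalone sketch that extracts the same two fields (the struct is trimmed to just what is needed; this is not the actual functional_test.go code):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	// Same selector and namespace as the test above.
	out, err := exec.Command("kubectl", "--context", "functional-421935",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase: %s, Ready: %s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}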

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.34s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-421935 logs: (1.3427197s)
--- PASS: TestFunctional/serial/LogsCmd (1.34s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.37s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 logs --file /tmp/TestFunctionalserialLogsFileCmd3979821496/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-421935 logs --file /tmp/TestFunctionalserialLogsFileCmd3979821496/001/logs.txt: (1.370828991s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.37s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.01s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-421935 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-421935
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-421935: exit status 115 (336.165119ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32632 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-421935 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.01s)
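
The SVC_UNREACHABLE exit makes sense here: the Service object exists and even gets a NodePort URL printed, but no pod backs it, so its Endpoints list is empty. A sketch of that emptiness check; minikube's real code path may differ, and the service name is the one from testdata/invalidsvc.yaml.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-421935",
		"get", "endpoints", "invalid-svc", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var ep struct {
		Subsets []struct {
			Addresses []struct {
				IP string `json:"ip"`
			} `json:"addresses"`
		} `json:"subsets"`
	}
	if err := json.Unmarshal(out, &ep); err != nil {
		panic(err)
	}
	n := 0
	for _, s := range ep.Subsets {
		n += len(s.Addresses)
	}
	if n == 0 {
		fmt.Println("no running pod for service invalid-svc found (SVC_UNREACHABLE)")
	} else {
		fmt.Printf("service backed by %d endpoint(s)\n", n)
	}
}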

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-421935 config get cpus: exit status 14 (54.239323ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-421935 config get cpus: exit status 14 (52.814493ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (14.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-421935 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-421935 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 858733: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.12s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-421935 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-421935 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (149.574671ms)

                                                
                                                
-- stdout --
	* [functional-421935] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17011
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17011-816603/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17011-816603/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0809 18:47:49.583694  858112 out.go:296] Setting OutFile to fd 1 ...
	I0809 18:47:49.583831  858112 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 18:47:49.583841  858112 out.go:309] Setting ErrFile to fd 2...
	I0809 18:47:49.583845  858112 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 18:47:49.584062  858112 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17011-816603/.minikube/bin
	I0809 18:47:49.584728  858112 out.go:303] Setting JSON to false
	I0809 18:47:49.585920  858112 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":9025,"bootTime":1691597845,"procs":472,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1038-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0809 18:47:49.585985  858112 start.go:138] virtualization: kvm guest
	I0809 18:47:49.588260  858112 out.go:177] * [functional-421935] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)
	I0809 18:47:49.589721  858112 out.go:177]   - MINIKUBE_LOCATION=17011
	I0809 18:47:49.591048  858112 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 18:47:49.589747  858112 notify.go:220] Checking for updates...
	I0809 18:47:49.592679  858112 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17011-816603/kubeconfig
	I0809 18:47:49.594032  858112 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17011-816603/.minikube
	I0809 18:47:49.595720  858112 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0809 18:47:49.597156  858112 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0809 18:47:49.598856  858112 config.go:182] Loaded profile config "functional-421935": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0809 18:47:49.599260  858112 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 18:47:49.624217  858112 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0809 18:47:49.624307  858112 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0809 18:47:49.680473  858112 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:47 SystemTime:2023-08-09 18:47:49.671586702 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1038-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0809 18:47:49.680570  858112 docker.go:294] overlay module found
	I0809 18:47:49.682193  858112 out.go:177] * Using the docker driver based on existing profile
	I0809 18:47:49.683549  858112 start.go:298] selected driver: docker
	I0809 18:47:49.683562  858112 start.go:901] validating driver "docker" against &{Name:functional-421935 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:functional-421935 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 18:47:49.683713  858112 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0809 18:47:49.685844  858112 out.go:177] 
	W0809 18:47:49.687219  858112 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0809 18:47:49.688582  858112 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-421935 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.35s)
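
The dry run fails fast because memory validation runs before any driver work: the requested 250MB is parsed and compared against the 1800MB usable minimum quoted in the error. A toy sketch of that comparison; the parsing rules here are simplified assumptions, not minikube's actual implementation.

package main

import (
	"fmt"
	"strconv"
	"strings"
)

const minUsableMB = 1800 // the figure quoted in the RSRC_INSUFFICIENT_REQ_MEMORY message

// parseMB handles the simple "<N>MB"/"<N>GB" forms used on the command line.
func parseMB(s string) (int, error) {
	s = strings.ToUpper(strings.TrimSpace(s))
	switch {
	case strings.HasSuffix(s, "GB"):
		n, err := strconv.Atoi(strings.TrimSuffix(s, "GB"))
		return n * 1024, err
	case strings.HasSuffix(s, "MB"):
		return strconv.Atoi(strings.TrimSuffix(s, "MB"))
	default:
		return strconv.Atoi(s) // bare number: assume MB
	}
}

func main() {
	req, err := parseMB("250MB") // the --memory value from the test above
	if err != nil {
		panic(err)
	}
	if req < minUsableMB {
		fmt.Printf("X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: "+
			"Requested memory allocation %dMiB is less than the usable minimum of %dMB\n",
			req, minUsableMB)
	}
}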

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-421935 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-421935 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (162.668213ms)

                                                
                                                
-- stdout --
	* [functional-421935] minikube v1.31.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17011
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17011-816603/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17011-816603/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0809 18:47:54.376916  858925 out.go:296] Setting OutFile to fd 1 ...
	I0809 18:47:54.377064  858925 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 18:47:54.377075  858925 out.go:309] Setting ErrFile to fd 2...
	I0809 18:47:54.377080  858925 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 18:47:54.377389  858925 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17011-816603/.minikube/bin
	I0809 18:47:54.377980  858925 out.go:303] Setting JSON to false
	I0809 18:47:54.380827  858925 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":9030,"bootTime":1691597845,"procs":476,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1038-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0809 18:47:54.380926  858925 start.go:138] virtualization: kvm guest
	I0809 18:47:54.382978  858925 out.go:177] * [functional-421935] minikube v1.31.1 sur Ubuntu 20.04 (kvm/amd64)
	I0809 18:47:54.384870  858925 out.go:177]   - MINIKUBE_LOCATION=17011
	I0809 18:47:54.384585  858925 notify.go:220] Checking for updates...
	I0809 18:47:54.386350  858925 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 18:47:54.388550  858925 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17011-816603/kubeconfig
	I0809 18:47:54.390274  858925 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17011-816603/.minikube
	I0809 18:47:54.391857  858925 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0809 18:47:54.393807  858925 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0809 18:47:54.395887  858925 config.go:182] Loaded profile config "functional-421935": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0809 18:47:54.396347  858925 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 18:47:54.420228  858925 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0809 18:47:54.420339  858925 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0809 18:47:54.480553  858925 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:47 SystemTime:2023-08-09 18:47:54.470678768 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1038-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0809 18:47:54.480699  858925 docker.go:294] overlay module found
	I0809 18:47:54.482744  858925 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0809 18:47:54.484219  858925 start.go:298] selected driver: docker
	I0809 18:47:54.484242  858925 start.go:901] validating driver "docker" against &{Name:functional-421935 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1690799191-16971@sha256:e2b8a0768c6a1fd3ed0453a7caf63756620121eab0a25a3ecf9665353865fd37 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:functional-421935 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0809 18:47:54.484392  858925 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0809 18:47:54.486460  858925 out.go:177] 
	W0809 18:47:54.487850  858925 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0809 18:47:54.489229  858925 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)
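
The French output above is the assertion target of this test; minikube selects a translation from the caller's locale. A sketch of reproducing it by hand, assuming LC_ALL is the variable consulted:

# Assumption: a French locale yields the translated RSRC_INSUFFICIENT_REQ_MEMORY message seen above.
LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-421935 --dry-run --memory 250MB --driver=docker --container-runtime=crio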

TestFunctional/parallel/StatusCmd (0.89s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.89s)

TestFunctional/parallel/ServiceCmdConnect (7.72s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-421935 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-421935 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6fb669fc84-ndtn6" [cad531d8-0054-4085-a79f-5787156d91f6] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-6fb669fc84-ndtn6" [cad531d8-0054-4085-a79f-5787156d91f6] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.00962774s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:31256
functional_test.go:1674: http://192.168.49.2:31256: success! body:

Hostname: hello-node-connect-6fb669fc84-ndtn6

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31256
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.72s)
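
The body above is echoserver reflecting the request back. Once service ... --url prints the NodePort endpoint, the same response can be fetched directly; port 31256 is specific to this run:

URL=$(out/minikube-linux-amd64 -p functional-421935 service hello-node-connect --url)
curl -s "$URL"   # returns the Hostname / Request Information block shown above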

TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (25.99s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [23d65cb1-ff3d-4235-9d2c-5f7fdd04431f] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.046015701s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-421935 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-421935 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-421935 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-421935 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8161146a-74dd-4f77-9ec9-626cb521e873] Pending
helpers_test.go:344: "sp-pod" [8161146a-74dd-4f77-9ec9-626cb521e873] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8161146a-74dd-4f77-9ec9-626cb521e873] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.010010045s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-421935 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-421935 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-421935 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ab3106e7-15b1-41f1-a0e1-e8275fe77f1e] Pending
helpers_test.go:344: "sp-pod" [ab3106e7-15b1-41f1-a0e1-e8275fe77f1e] Running
2023/08/09 18:48:03 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.010464478s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-421935 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.99s)
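
The claim exercised above comes from testdata/storage-provisioner/pvc.yaml. For orientation, a minimal sketch of such a claim, applied the way the test does; the size and access mode are assumptions, not that file's actual contents:

kubectl --context functional-421935 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim        # matches the claim name queried above
spec:
  accessModes:
    - ReadWriteOnce    # assumed
  resources:
    requests:
      storage: 500Mi   # assumed
EOF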

TestFunctional/parallel/SSHCmd (0.52s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.52s)

TestFunctional/parallel/CpCmd (1.26s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 ssh -n functional-421935 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 cp functional-421935:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3965030807/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 ssh -n functional-421935 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.26s)

TestFunctional/parallel/MySQL (24.72s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-421935 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-7db894d786-554rl" [bbbf12bc-44af-48bc-9d93-b92049fddb98] Pending
helpers_test.go:344: "mysql-7db894d786-554rl" [bbbf12bc-44af-48bc-9d93-b92049fddb98] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-7db894d786-554rl" [bbbf12bc-44af-48bc-9d93-b92049fddb98] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.026694382s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-421935 exec mysql-7db894d786-554rl -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-421935 exec mysql-7db894d786-554rl -- mysql -ppassword -e "show databases;": exit status 1 (211.079981ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-421935 exec mysql-7db894d786-554rl -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-421935 exec mysql-7db894d786-554rl -- mysql -ppassword -e "show databases;": exit status 1 (222.87042ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-421935 exec mysql-7db894d786-554rl -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-421935 exec mysql-7db894d786-554rl -- mysql -ppassword -e "show databases;": exit status 1 (138.03757ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-421935 exec mysql-7db894d786-554rl -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.72s)
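
The ERROR 1045 and ERROR 2002 responses above are ordinary mysqld warm-up noise; the test simply reissues the query until it succeeds. The same pattern by hand (a sketch, not the test's actual retry logic):

until kubectl --context functional-421935 exec mysql-7db894d786-554rl -- \
    mysql -ppassword -e "show databases;"; do
  sleep 2   # wait out mysqld initialization, then retry
done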

TestFunctional/parallel/FileSync (0.34s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/823434/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 ssh "sudo cat /etc/test/nested/copy/823434/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.34s)

TestFunctional/parallel/CertSync (1.87s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/823434.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 ssh "sudo cat /etc/ssl/certs/823434.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/823434.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 ssh "sudo cat /usr/share/ca-certificates/823434.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/8234342.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 ssh "sudo cat /etc/ssl/certs/8234342.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/8234342.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 ssh "sudo cat /usr/share/ca-certificates/8234342.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.87s)
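
The 51391683.0 and 3ec20f2e.0 names checked above follow OpenSSL's hashed-certificate convention: the filename is the subject hash of the synced .pem. A sketch of verifying that, assuming openssl is available inside the node:

# Should print 51391683, matching the /etc/ssl/certs/51391683.0 link checked above.
out/minikube-linux-amd64 -p functional-421935 ssh "openssl x509 -hash -noout -in /etc/ssl/certs/823434.pem"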

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-421935 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-421935 ssh "sudo systemctl is-active docker": exit status 1 (311.61383ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-421935 ssh "sudo systemctl is-active containerd": exit status 1 (312.903958ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)
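
The non-zero exits above are the expected result: systemctl is-active prints the unit state and exits non-zero for anything other than active, and minikube ssh surfaces that as its own exit status 1. Checked by hand:

out/minikube-linux-amd64 -p functional-421935 ssh "sudo systemctl is-active docker"; echo "exit=$?"
# prints "inactive" and exit=1 here; the remote status 3 appears only in the ssh stderr above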

TestFunctional/parallel/License (0.15s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.15s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.51s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.51s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-421935 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.27.4
registry.k8s.io/kube-proxy:v1.27.4
registry.k8s.io/kube-controller-manager:v1.27.4
registry.k8s.io/kube-apiserver:v1.27.4
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-421935
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230511-dc714da8
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-421935 image ls --format short --alsologtostderr:
I0809 18:47:55.504696  859530 out.go:296] Setting OutFile to fd 1 ...
I0809 18:47:55.504815  859530 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0809 18:47:55.504825  859530 out.go:309] Setting ErrFile to fd 2...
I0809 18:47:55.504829  859530 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0809 18:47:55.505046  859530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17011-816603/.minikube/bin
I0809 18:47:55.505616  859530 config.go:182] Loaded profile config "functional-421935": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0809 18:47:55.505720  859530 config.go:182] Loaded profile config "functional-421935": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0809 18:47:55.506161  859530 cli_runner.go:164] Run: docker container inspect functional-421935 --format={{.State.Status}}
I0809 18:47:55.524868  859530 ssh_runner.go:195] Run: systemctl --version
I0809 18:47:55.524926  859530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-421935
I0809 18:47:55.541424  859530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33417 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/functional-421935/id_rsa Username:docker}
I0809 18:47:55.640056  859530 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
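
As the Stderr trace shows, image ls ends up running sudo crictl images --output json inside the node. The same list can be pulled directly; the jq filter is illustrative:

out/minikube-linux-amd64 -p functional-421935 ssh "sudo crictl images --output json" \
  | jq -r '.images[].repoTags[]'   # roughly the short-format list above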

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-421935 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/google-containers/addon-resizer  | functional-421935  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/etcd                    | 3.5.7-0            | 86b6af7dd652c | 297MB  |
| registry.k8s.io/kube-controller-manager | v1.27.4            | f466468864b7a | 114MB  |
| registry.k8s.io/kube-proxy              | v1.27.4            | 6848d7eda0341 | 72.7MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/my-image                      | functional-421935  | ee7869978a36a | 1.47MB |
| docker.io/library/nginx                 | alpine             | 414132ff3b076 | 43.2MB |
| docker.io/library/nginx                 | latest             | 89da1fb6dcb96 | 191MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| docker.io/library/mysql                 | 5.7                | 92034fe9a41f4 | 601MB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.27.4            | e7972205b6614 | 122MB  |
| registry.k8s.io/kube-scheduler          | v1.27.4            | 98ef2570f3cde | 59.8MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20230511-dc714da8 | b0b1fa0f58c6e | 65.2MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-421935 image ls --format table --alsologtostderr:
I0809 18:47:58.629644  860334 out.go:296] Setting OutFile to fd 1 ...
I0809 18:47:58.629780  860334 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0809 18:47:58.629788  860334 out.go:309] Setting ErrFile to fd 2...
I0809 18:47:58.629792  860334 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0809 18:47:58.630000  860334 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17011-816603/.minikube/bin
I0809 18:47:58.630596  860334 config.go:182] Loaded profile config "functional-421935": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0809 18:47:58.630697  860334 config.go:182] Loaded profile config "functional-421935": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0809 18:47:58.631126  860334 cli_runner.go:164] Run: docker container inspect functional-421935 --format={{.State.Status}}
I0809 18:47:58.650184  860334 ssh_runner.go:195] Run: systemctl --version
I0809 18:47:58.650228  860334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-421935
I0809 18:47:58.669485  860334 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33417 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/functional-421935/id_rsa Username:docker}
I0809 18:47:58.768368  860334 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-421935 image ls --format json --alsologtostderr:
[{"id":"89da1fb6dcb964dd35c3f41b7b93ffc35eaf20bc61f2e1335fea710a18424287","repoDigests":["docker.io/library/nginx@sha256:67f9a4f10d147a6e04629340e6493c9703300ca23a2f7f3aa56fe615d75d31ca","docker.io/library/nginx@sha256:73e957703f1266530db0aeac1fd6a3f87c1e59943f4c13eb340bb8521c6041d7"],"repoTags":["docker.io/library/nginx:latest"],"size":"191049983"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"b0b1fa0f58c6e932b7f20bf208b2841317a1
e8c88cc51b18358310bbd8ec95da","repoDigests":["docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974","docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"],"repoTags":["docker.io/kindest/kindnetd:v20230511-dc714da8"],"size":"65249302"},{"id":"f07de8e87ef9e66bd8a2a513e23d5eb21b6178b479bc42694b44787728ee606a","repoDigests":["docker.io/library/b8b2a94f61a437b881212bc6fd47ea54dc89d25df3270e8e4d2afea26d3c4d5d-tmp@sha256:afa90a5c5652f1ccfb78a9849f54dbe248797a85683ea4b95e7f3f3d35f84715"],"repoTags":[],"size":"1465612"},{"id":"92034fe9a41f4344b97f3fc88a8796248e2cfa9b934be58379f3dbc150d07d9d","repoDigests":["docker.io/library/mysql@sha256:2c23f254c6b9444ecda9ba36051a9800e8934a2f5828ecc8730531db8142af83","docker.io/library/mysql@sha256:aaa1374f1e6c24d73e9dfa8f2cdae81c8054e6d1d80c32da883a9050258b6e83"],"repoTags":["docker.io/library/mysql:5.7"],"size":"601277093"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b5
9cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-421935"],"size":"34114467"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"86b6af7dd652c1b38118be1c338e9354b33
469e69a218f7e290a0ca5304ad681","repoDigests":["registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83","registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9"],"repoTags":["registry.k8s.io/etcd:3.5.7-0"],"size":"297083935"},{"id":"414132ff3b076936528928c823b4f3d1e1178b2692ae04defc8f8fdfd0a83a03","repoDigests":["docker.io/library/nginx@sha256:647c5c83418c19eef0cddc647b9899326e3081576390c4c7baa4fce545123b6c","docker.io/library/nginx@sha256:ccf066d2cfec0cfe57a63cf26f4b7cabbea80e11ab5b7f1cc11a1b5efd65ea0b"],"repoTags":["docker.io/library/nginx:alpine"],"size":"43233068"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":
"ee7869978a36a84657f7cce6067f2c25591280fbc18bfcc9b82ecfc05a640388","repoDigests":["localhost/my-image@sha256:38de343fe4c7dde52673a2623d82a291050138b9c88323615398d2d3d4c27415"],"repoTags":["localhost/my-image:functional-421935"],"size":"1468193"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4","repoDigests":["registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf","registry.k8s.io/kube-proxy@sha256:ce9abe867450f8962eb851670b5869219ca0c3376777d1e18d89f9abedbe10c3"],"repoTags":["registry.k8s.io/kube-proxy:v1.27.4"],"size":"72714135"},{"id":"98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16","repoDigests":["registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8e
f904bf5156583ffdb6a733ab04af","registry.k8s.io/kube-scheduler@sha256:9c58009453cfcd7533721327269d2ef0af93d09f21812a5d584c375840117da7"],"repoTags":["registry.k8s.io/kube-scheduler:v1.27.4"],"size":"59814710"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-
scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c","repoDigests":["registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d","registry.k8s.io/kube-apiserver@sha256:dcf39b4579f896291ec79bb2ef94ad2b51e2ad1846086df705b06dc3ae20c854"],"repoTags":["registry.k8s.io/kube-apiserver:v1.27.4"],"size":"122078160"},{"id":"f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265","registry.k8s.io/kube-controller-manager@sha256:c4765f94930681526ac9179fc4e49b5254abcbfa33841af4602a52bc664f6934"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.27.4"],"size":"113931062"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83a
ab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-421935 image ls --format json --alsologtostderr:
I0809 18:47:58.349049  860222 out.go:296] Setting OutFile to fd 1 ...
I0809 18:47:58.349247  860222 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0809 18:47:58.349259  860222 out.go:309] Setting ErrFile to fd 2...
I0809 18:47:58.349267  860222 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0809 18:47:58.349560  860222 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17011-816603/.minikube/bin
I0809 18:47:58.350257  860222 config.go:182] Loaded profile config "functional-421935": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0809 18:47:58.350387  860222 config.go:182] Loaded profile config "functional-421935": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0809 18:47:58.352426  860222 cli_runner.go:164] Run: docker container inspect functional-421935 --format={{.State.Status}}
I0809 18:47:58.374442  860222 ssh_runner.go:195] Run: systemctl --version
I0809 18:47:58.374506  860222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-421935
I0809 18:47:58.397807  860222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33417 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/functional-421935/id_rsa Username:docker}
I0809 18:47:58.508045  860222 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-421935 image ls --format yaml --alsologtostderr:
- id: f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265
- registry.k8s.io/kube-controller-manager@sha256:c4765f94930681526ac9179fc4e49b5254abcbfa33841af4602a52bc664f6934
repoTags:
- registry.k8s.io/kube-controller-manager:v1.27.4
size: "113931062"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681
repoDigests:
- registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83
- registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9
repoTags:
- registry.k8s.io/etcd:3.5.7-0
size: "297083935"
- id: 6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf
- registry.k8s.io/kube-proxy@sha256:ce9abe867450f8962eb851670b5869219ca0c3376777d1e18d89f9abedbe10c3
repoTags:
- registry.k8s.io/kube-proxy:v1.27.4
size: "72714135"
- id: 98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af
- registry.k8s.io/kube-scheduler@sha256:9c58009453cfcd7533721327269d2ef0af93d09f21812a5d584c375840117da7
repoTags:
- registry.k8s.io/kube-scheduler:v1.27.4
size: "59814710"
- id: 89da1fb6dcb964dd35c3f41b7b93ffc35eaf20bc61f2e1335fea710a18424287
repoDigests:
- docker.io/library/nginx@sha256:67f9a4f10d147a6e04629340e6493c9703300ca23a2f7f3aa56fe615d75d31ca
- docker.io/library/nginx@sha256:73e957703f1266530db0aeac1fd6a3f87c1e59943f4c13eb340bb8521c6041d7
repoTags:
- docker.io/library/nginx:latest
size: "191049983"
- id: e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d
- registry.k8s.io/kube-apiserver@sha256:dcf39b4579f896291ec79bb2ef94ad2b51e2ad1846086df705b06dc3ae20c854
repoTags:
- registry.k8s.io/kube-apiserver:v1.27.4
size: "122078160"
- id: 414132ff3b076936528928c823b4f3d1e1178b2692ae04defc8f8fdfd0a83a03
repoDigests:
- docker.io/library/nginx@sha256:647c5c83418c19eef0cddc647b9899326e3081576390c4c7baa4fce545123b6c
- docker.io/library/nginx@sha256:ccf066d2cfec0cfe57a63cf26f4b7cabbea80e11ab5b7f1cc11a1b5efd65ea0b
repoTags:
- docker.io/library/nginx:alpine
size: "43233068"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-421935
size: "34114467"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da
repoDigests:
- docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974
- docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9
repoTags:
- docker.io/kindest/kindnetd:v20230511-dc714da8
size: "65249302"
- id: 92034fe9a41f4344b97f3fc88a8796248e2cfa9b934be58379f3dbc150d07d9d
repoDigests:
- docker.io/library/mysql@sha256:2c23f254c6b9444ecda9ba36051a9800e8934a2f5828ecc8730531db8142af83
- docker.io/library/mysql@sha256:aaa1374f1e6c24d73e9dfa8f2cdae81c8054e6d1d80c32da883a9050258b6e83
repoTags:
- docker.io/library/mysql:5.7
size: "601277093"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-421935 image ls --format yaml --alsologtostderr:
I0809 18:47:55.725145  859581 out.go:296] Setting OutFile to fd 1 ...
I0809 18:47:55.725498  859581 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0809 18:47:55.725511  859581 out.go:309] Setting ErrFile to fd 2...
I0809 18:47:55.725518  859581 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0809 18:47:55.726116  859581 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17011-816603/.minikube/bin
I0809 18:47:55.727089  859581 config.go:182] Loaded profile config "functional-421935": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0809 18:47:55.727213  859581 config.go:182] Loaded profile config "functional-421935": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0809 18:47:55.727623  859581 cli_runner.go:164] Run: docker container inspect functional-421935 --format={{.State.Status}}
I0809 18:47:55.747393  859581 ssh_runner.go:195] Run: systemctl --version
I0809 18:47:55.747449  859581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-421935
I0809 18:47:55.764475  859581 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33417 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/functional-421935/id_rsa Username:docker}
I0809 18:47:55.856088  859581 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-421935 ssh pgrep buildkitd: exit status 1 (267.65955ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 image build -t localhost/my-image:functional-421935 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-421935 image build -t localhost/my-image:functional-421935 testdata/build --alsologtostderr: (1.843274458s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-421935 image build -t localhost/my-image:functional-421935 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> f07de8e87ef
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-421935
--> ee7869978a3
Successfully tagged localhost/my-image:functional-421935
ee7869978a36a84657f7cce6067f2c25591280fbc18bfcc9b82ecfc05a640388
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-421935 image build -t localhost/my-image:functional-421935 testdata/build --alsologtostderr:
I0809 18:47:56.218424  859719 out.go:296] Setting OutFile to fd 1 ...
I0809 18:47:56.218603  859719 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0809 18:47:56.218614  859719 out.go:309] Setting ErrFile to fd 2...
I0809 18:47:56.218621  859719 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0809 18:47:56.218938  859719 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17011-816603/.minikube/bin
I0809 18:47:56.219754  859719 config.go:182] Loaded profile config "functional-421935": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0809 18:47:56.220283  859719 config.go:182] Loaded profile config "functional-421935": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0809 18:47:56.220674  859719 cli_runner.go:164] Run: docker container inspect functional-421935 --format={{.State.Status}}
I0809 18:47:56.238180  859719 ssh_runner.go:195] Run: systemctl --version
I0809 18:47:56.238241  859719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-421935
I0809 18:47:56.255328  859719 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33417 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/functional-421935/id_rsa Username:docker}
I0809 18:47:56.355734  859719 build_images.go:151] Building image from path: /tmp/build.3305501605.tar
I0809 18:47:56.355795  859719 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0809 18:47:56.364361  859719 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3305501605.tar
I0809 18:47:56.368003  859719 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3305501605.tar: stat -c "%s %y" /var/lib/minikube/build/build.3305501605.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3305501605.tar': No such file or directory
I0809 18:47:56.368037  859719 ssh_runner.go:362] scp /tmp/build.3305501605.tar --> /var/lib/minikube/build/build.3305501605.tar (3072 bytes)
I0809 18:47:56.392217  859719 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3305501605
I0809 18:47:56.401117  859719 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3305501605 -xf /var/lib/minikube/build/build.3305501605.tar
I0809 18:47:56.409884  859719 crio.go:297] Building image: /var/lib/minikube/build/build.3305501605
I0809 18:47:56.409959  859719 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-421935 /var/lib/minikube/build/build.3305501605 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0809 18:47:57.979865  859719 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-421935 /var/lib/minikube/build/build.3305501605 --cgroup-manager=cgroupfs: (1.569875407s)
I0809 18:47:57.979923  859719 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3305501605
I0809 18:47:57.993134  859719 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3305501605.tar
I0809 18:47:58.004701  859719 build_images.go:207] Built localhost/my-image:functional-421935 from /tmp/build.3305501605.tar
I0809 18:47:58.004738  859719 build_images.go:123] succeeded building to: functional-421935
I0809 18:47:58.004743  859719 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.39s)
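
Note: the STEP lines above give away the build context; a minimal sketch that replays the same build by hand (the exact contents of testdata/build are inferred from the log, not copied from the repo):

mkdir -p /tmp/build
printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > /tmp/build/Dockerfile
echo test > /tmp/build/content.txt
out/minikube-linux-amd64 -p functional-421935 image build -t localhost/my-image:functional-421935 /tmp/build

On the crio runtime the build is handed to sudo podman build inside the node, as the ssh_runner lines above show.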

TestFunctional/parallel/ImageCommands/Setup (1.06s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.03261566s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-421935
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.06s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 image load --daemon gcr.io/google-containers/addon-resizer:functional-421935 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-421935 image load --daemon gcr.io/google-containers/addon-resizer:functional-421935 --alsologtostderr: (3.553683465s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.76s)

TestFunctional/parallel/ServiceCmd/DeployApp (24.2s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-421935 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-421935 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-775766b4cc-vvzpz" [c4e48471-4371-45fd-af0d-a0e32f63bf7d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-775766b4cc-vvzpz" [c4e48471-4371-45fd-af0d-a0e32f63bf7d] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 24.009441119s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (24.20s)
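
Note: the deploy step can be replayed outside the test harness with plain kubectl; the wait command below is one way to mirror the readiness polling the helper performs, not the test's own code:

kubectl --context functional-421935 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
kubectl --context functional-421935 expose deployment hello-node --type=NodePort --port=8080
kubectl --context functional-421935 wait --for=condition=Ready pod -l app=hello-node --timeout=10m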

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.49s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-421935 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-421935 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-421935 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-421935 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 854778: os: process already finished
helpers_test.go:508: unable to kill pid 854621: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.49s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-421935 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (23.3s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-421935 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [d3486d26-edd6-40f4-a200-cc34be4b86f1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [d3486d26-edd6-40f4-a200-cc34be4b86f1] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 23.009684974s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (23.30s)
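
Note: testdata/testsvc.yaml is not reproduced in this report; judging from the run=nginx-svc selector and the LoadBalancer ingress IP read later, a manifest along these lines would behave the same (hypothetical reconstruction, not the repo file):

kubectl --context functional-421935 apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx-svc
  labels:
    run: nginx-svc
spec:
  containers:
  - name: nginx
    image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: LoadBalancer
  selector:
    run: nginx-svc
  ports:
  - port: 80
EOF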

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 image load --daemon gcr.io/google-containers/addon-resizer:functional-421935 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-421935 image load --daemon gcr.io/google-containers/addon-resizer:functional-421935 --alsologtostderr: (2.732150565s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.11s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (10.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-421935
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 image load --daemon gcr.io/google-containers/addon-resizer:functional-421935 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-421935 image load --daemon gcr.io/google-containers/addon-resizer:functional-421935 --alsologtostderr: (8.977237885s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (10.13s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 image save gcr.io/google-containers/addon-resizer:functional-421935 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.76s)

TestFunctional/parallel/ImageCommands/ImageRemove (1.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 image rm gcr.io/google-containers/addon-resizer:functional-421935 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.35s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-421935 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.132236686s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.37s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-421935
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 image save --daemon gcr.io/google-containers/addon-resizer:functional-421935 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-421935
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.90s)

TestFunctional/parallel/ServiceCmd/List (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.51s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.9s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 service list -o json
functional_test.go:1493: Took "897.035157ms" to run "out/minikube-linux-amd64 -p functional-421935 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.90s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-421935 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.110.205.122 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
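
Note: while the tunnel from StartTunnel is running, the service's LoadBalancer IP (10.110.205.122 in this run) is reachable directly from the host; a manual spot check might look like:

out/minikube-linux-amd64 -p functional-421935 tunnel --alsologtostderr &
kubectl --context functional-421935 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
curl -sI http://10.110.205.122/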

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-421935 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:32251
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

TestFunctional/parallel/ServiceCmd/Format (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.53s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

TestFunctional/parallel/ServiceCmd/URL (0.57s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:32251
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.57s)

TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "305.241204ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "44.256568ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "322.338279ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "50.993604ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)
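
Note: profile list -o json emits a machine-readable document; assuming the usual valid/invalid top-level arrays (the payload itself is not shown in this log, so the field names are an assumption), profile names can be pulled out with jq:

out/minikube-linux-amd64 profile list -o json | jq -r '.valid[].Name'
out/minikube-linux-amd64 profile list -o json --light

The --light variant returns in ~51ms versus ~322ms above, presumably because it skips live cluster status checks.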

TestFunctional/parallel/MountCmd/any-port (9.67s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-421935 /tmp/TestFunctionalparallelMountCmdany-port1666440954/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1691606869314689746" to /tmp/TestFunctionalparallelMountCmdany-port1666440954/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1691606869314689746" to /tmp/TestFunctionalparallelMountCmdany-port1666440954/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1691606869314689746" to /tmp/TestFunctionalparallelMountCmdany-port1666440954/001/test-1691606869314689746
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-421935 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (308.354916ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 ssh "findmnt -T /mount-9p | grep 9p"
E0809 18:47:50.165920  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/client.crt: no such file or directory
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug  9 18:47 created-by-test
-rw-r--r-- 1 docker docker 24 Aug  9 18:47 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug  9 18:47 test-1691606869314689746
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 ssh cat /mount-9p/test-1691606869314689746
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-421935 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [caf7b1b1-2d8a-479d-a7ab-a9ebcda78b14] Pending
helpers_test.go:344: "busybox-mount" [caf7b1b1-2d8a-479d-a7ab-a9ebcda78b14] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [caf7b1b1-2d8a-479d-a7ab-a9ebcda78b14] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [caf7b1b1-2d8a-479d-a7ab-a9ebcda78b14] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.020012035s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-421935 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-421935 /tmp/TestFunctionalparallelMountCmdany-port1666440954/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.67s)
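
Note: the same 9p mount can be exercised by hand with the commands the test itself issues (the host directory below is an arbitrary example):

mkdir -p /tmp/mount-demo
out/minikube-linux-amd64 mount -p functional-421935 /tmp/mount-demo:/mount-9p &
out/minikube-linux-amd64 -p functional-421935 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-421935 ssh -- ls -la /mount-9p

The initial non-zero findmnt exit above is expected: the first probe races the mount daemon coming up, and the helper simply reruns the check.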

TestFunctional/parallel/MountCmd/specific-port (1.98s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-421935 /tmp/TestFunctionalparallelMountCmdspecific-port3216747073/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-421935 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (294.026261ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-421935 /tmp/TestFunctionalparallelMountCmdspecific-port3216747073/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-421935 ssh "sudo umount -f /mount-9p": exit status 1 (273.349016ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-421935 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-421935 /tmp/TestFunctionalparallelMountCmdspecific-port3216747073/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.98s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.86s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-421935 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3365728086/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-421935 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3365728086/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-421935 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3365728086/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-421935 ssh "findmnt -T" /mount1: exit status 1 (338.91747ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-421935 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-421935 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-421935 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3365728086/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-421935 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3365728086/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-421935 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3365728086/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.86s)

TestFunctional/delete_addon-resizer_images (0.1s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-421935
--- PASS: TestFunctional/delete_addon-resizer_images (0.10s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-421935
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-421935
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (80.13s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-849795 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0809 18:49:12.086716  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-849795 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m20.126000274s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (80.13s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.8s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-849795 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-849795 addons enable ingress --alsologtostderr -v=5: (10.80310627s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.80s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.54s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-849795 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.54s)

TestJSONOutput/start/Command (67.34s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-809987 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0809 18:53:03.097776  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/functional-421935/client.crt: no such file or directory
E0809 18:53:44.059334  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/functional-421935/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-809987 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m7.337452865s)
--- PASS: TestJSONOutput/start/Command (67.34s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.65s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-809987 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.65s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.6s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-809987 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.72s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-809987 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-809987 --output=json --user=testUser: (5.719753556s)
--- PASS: TestJSONOutput/stop/Command (5.72s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-477281 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-477281 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (64.350951ms)
-- stdout --
	{"specversion":"1.0","id":"a91ab212-b03c-41d6-bb38-aa06b7d26008","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-477281] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"df158125-b338-4121-b97d-206dd8ed8141","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17011"}}
	{"specversion":"1.0","id":"612a2370-a49d-479f-9d9c-3b72da6e3e3d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"05d9e02f-3000-4e32-99d1-5eb76e7b35ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17011-816603/kubeconfig"}}
	{"specversion":"1.0","id":"9f29a10e-ba7f-4675-afa7-7745443302d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17011-816603/.minikube"}}
	{"specversion":"1.0","id":"68b23322-7253-44ab-8c8d-c84f6739d891","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"1dab9cbc-3cae-4274-921d-f9f99b4b2d81","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"79812853-348d-46c4-96df-cc48d9cb800d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-477281" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-477281
--- PASS: TestErrorJSONOutput (0.20s)
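
Note: every stdout line above is a CloudEvents-style JSON object, so the failure can be extracted mechanically; for example, with a jq filter written against the fields visible above:

out/minikube-linux-amd64 start -p json-output-error-477281 --memory=2200 --output=json --wait=true --driver=fail | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'

For this run that prints: DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/amd64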

TestKicCustomNetwork/create_custom_network (32.41s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-800152 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-800152 --network=: (30.45725407s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-800152" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-800152
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-800152: (1.938954155s)
--- PASS: TestKicCustomNetwork/create_custom_network (32.41s)

TestKicCustomNetwork/use_default_bridge_network (24.99s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-942108 --network=bridge
E0809 18:54:44.724533  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/client.crt: no such file or directory
E0809 18:54:44.729871  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/client.crt: no such file or directory
E0809 18:54:44.740175  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/client.crt: no such file or directory
E0809 18:54:44.760717  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/client.crt: no such file or directory
E0809 18:54:44.804519  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/client.crt: no such file or directory
E0809 18:54:44.885043  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/client.crt: no such file or directory
E0809 18:54:45.045447  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/client.crt: no such file or directory
E0809 18:54:45.366020  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/client.crt: no such file or directory
E0809 18:54:46.006996  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/client.crt: no such file or directory
E0809 18:54:47.287274  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/client.crt: no such file or directory
E0809 18:54:49.847546  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/client.crt: no such file or directory
E0809 18:54:54.968781  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/client.crt: no such file or directory
E0809 18:55:05.209191  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-942108 --network=bridge: (23.038066729s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-942108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-942108
E0809 18:55:05.980546  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/functional-421935/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-942108: (1.931638831s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.99s)

TestKicExistingNetwork (24.17s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-392036 --network=existing-network
E0809 18:55:25.690159  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-392036 --network=existing-network: (22.104476237s)
helpers_test.go:175: Cleaning up "existing-network-392036" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-392036
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-392036: (1.934629351s)
--- PASS: TestKicExistingNetwork (24.17s)
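
Note: as the name suggests, this test points minikube at a docker network that already exists; the manual equivalent (the network creation flags are an assumption) is roughly:

docker network create existing-network
out/minikube-linux-amd64 start -p existing-network-392036 --network=existing-network
docker network ls --format {{.Name}}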

TestKicCustomSubnet (24.15s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-745648 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-745648 --subnet=192.168.60.0/24: (22.128280735s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-745648 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-745648" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-745648
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-745648: (2.004259857s)
--- PASS: TestKicCustomSubnet (24.15s)
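The --format template exercised above digs the first IPAM entry out of docker network inspect's JSON. A minimal Go sketch of the same template logic, run against a hand-rolled struct rather than the real Docker API types (the field set is trimmed to what the template touches):

package main

import (
	"os"
	"text/template"
)

func main() {
	// Stand-in for the network object docker network inspect returns;
	// only the fields the template reads are modeled here.
	var n struct {
		IPAM struct{ Config []struct{ Subnet string } }
	}
	n.IPAM.Config = []struct{ Subnet string }{{Subnet: "192.168.60.0/24"}}

	// Exactly the template string passed via --format in the test above.
	tmpl := template.Must(template.New("subnet").Parse(
		"{{(index .IPAM.Config 0).Subnet}}"))
	_ = tmpl.Execute(os.Stdout, n) // prints 192.168.60.0/24
}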

                                                
                                    
TestKicStaticIP (23.95s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-440863 --static-ip=192.168.200.200
E0809 18:56:06.651037  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-440863 --static-ip=192.168.200.200: (21.809990595s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-440863 ip
helpers_test.go:175: Cleaning up "static-ip-440863" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-440863
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-440863: (2.012502989s)
--- PASS: TestKicStaticIP (23.95s)

                                                
                                    
TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (51.94s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-275504 --driver=docker  --container-runtime=crio
E0809 18:56:28.243553  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-275504 --driver=docker  --container-runtime=crio: (23.457834439s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-278246 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-278246 --driver=docker  --container-runtime=crio: (23.442867057s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-275504
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-278246
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-278246" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-278246
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-278246: (1.870107157s)
helpers_test.go:175: Cleaning up "first-275504" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-275504
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-275504: (2.173793196s)
--- PASS: TestMinikubeProfile (51.94s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (5.48s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-819631 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-819631 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.475069193s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.48s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.25s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-819631 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.96s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-838549 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
E0809 18:57:22.135025  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/functional-421935/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-838549 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.955282486s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.96s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.24s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-838549 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.24s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.62s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-819631 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-819631 --alsologtostderr -v=5: (1.622811554s)
--- PASS: TestMountStart/serial/DeleteFirst (1.62s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.25s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-838549 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                    
TestMountStart/serial/Stop (1.19s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-838549
E0809 18:57:28.572051  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/client.crt: no such file or directory
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-838549: (1.191237366s)
--- PASS: TestMountStart/serial/Stop (1.19s)

                                                
                                    
TestMountStart/serial/RestartStopped (6.94s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-838549
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-838549: (5.943932475s)
--- PASS: TestMountStart/serial/RestartStopped (6.94s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.24s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-838549 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.24s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (88.3s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-814696 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0809 18:57:49.821362  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/functional-421935/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-814696 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m27.857126634s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814696 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (88.30s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.06s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-814696 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-814696 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-814696 -- rollout status deployment/busybox: (2.248585819s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-814696 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-814696 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-814696 -- exec busybox-67b7f59bb-jxlzc -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-814696 -- exec busybox-67b7f59bb-wvdrx -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-814696 -- exec busybox-67b7f59bb-jxlzc -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-814696 -- exec busybox-67b7f59bb-wvdrx -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-814696 -- exec busybox-67b7f59bb-jxlzc -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-814696 -- exec busybox-67b7f59bb-wvdrx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.06s)

                                                
                                    
TestMultiNode/serial/AddNode (49.01s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-814696 -v 3 --alsologtostderr
E0809 18:59:44.724083  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-814696 -v 3 --alsologtostderr: (48.402537301s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814696 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (49.01s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.27s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.27s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.05s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814696 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814696 cp testdata/cp-test.txt multinode-814696:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814696 ssh -n multinode-814696 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814696 cp multinode-814696:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1118717441/001/cp-test_multinode-814696.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814696 ssh -n multinode-814696 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814696 cp multinode-814696:/home/docker/cp-test.txt multinode-814696-m02:/home/docker/cp-test_multinode-814696_multinode-814696-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814696 ssh -n multinode-814696 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814696 ssh -n multinode-814696-m02 "sudo cat /home/docker/cp-test_multinode-814696_multinode-814696-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814696 cp multinode-814696:/home/docker/cp-test.txt multinode-814696-m03:/home/docker/cp-test_multinode-814696_multinode-814696-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814696 ssh -n multinode-814696 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814696 ssh -n multinode-814696-m03 "sudo cat /home/docker/cp-test_multinode-814696_multinode-814696-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814696 cp testdata/cp-test.txt multinode-814696-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814696 ssh -n multinode-814696-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814696 cp multinode-814696-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1118717441/001/cp-test_multinode-814696-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814696 ssh -n multinode-814696-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814696 cp multinode-814696-m02:/home/docker/cp-test.txt multinode-814696:/home/docker/cp-test_multinode-814696-m02_multinode-814696.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814696 ssh -n multinode-814696-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814696 ssh -n multinode-814696 "sudo cat /home/docker/cp-test_multinode-814696-m02_multinode-814696.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814696 cp multinode-814696-m02:/home/docker/cp-test.txt multinode-814696-m03:/home/docker/cp-test_multinode-814696-m02_multinode-814696-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814696 ssh -n multinode-814696-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814696 ssh -n multinode-814696-m03 "sudo cat /home/docker/cp-test_multinode-814696-m02_multinode-814696-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814696 cp testdata/cp-test.txt multinode-814696-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814696 ssh -n multinode-814696-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814696 cp multinode-814696-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1118717441/001/cp-test_multinode-814696-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814696 ssh -n multinode-814696-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814696 cp multinode-814696-m03:/home/docker/cp-test.txt multinode-814696:/home/docker/cp-test_multinode-814696-m03_multinode-814696.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814696 ssh -n multinode-814696-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814696 ssh -n multinode-814696 "sudo cat /home/docker/cp-test_multinode-814696-m03_multinode-814696.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814696 cp multinode-814696-m03:/home/docker/cp-test.txt multinode-814696-m02:/home/docker/cp-test_multinode-814696-m03_multinode-814696-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814696 ssh -n multinode-814696-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814696 ssh -n multinode-814696-m02 "sudo cat /home/docker/cp-test_multinode-814696-m03_multinode-814696-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.05s)
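The copy matrix above covers every direction minikube cp accepts: host to node, node back to the host, and node to node, each round verified with an ssh'd sudo cat. A condensed Go sketch of one host-to-node round trip, shelling out to the same binary the test drives (profile name and paths are copied from the log; adjust the binary path for your checkout):

package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary under test with the given arguments.
func run(args ...string) ([]byte, error) {
	return exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
}

func main() {
	// host -> node copy, mirroring helpers_test.go:556 above.
	if out, err := run("-p", "multinode-814696", "cp",
		"testdata/cp-test.txt", "multinode-814696:/home/docker/cp-test.txt"); err != nil {
		fmt.Println(string(out), err)
		return
	}
	// Read the file back over ssh, mirroring helpers_test.go:534.
	out, err := run("-p", "multinode-814696", "ssh", "-n", "multinode-814696",
		"sudo cat /home/docker/cp-test.txt")
	fmt.Println(string(out), err)
}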

                                                
                                    
TestMultiNode/serial/StopNode (2.1s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814696 node stop m03
E0809 19:00:12.412249  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/client.crt: no such file or directory
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-814696 node stop m03: (1.188144884s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814696 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-814696 status: exit status 7 (455.403669ms)

                                                
                                                
-- stdout --
	multinode-814696
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-814696-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-814696-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814696 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-814696 status --alsologtostderr: exit status 7 (456.640776ms)

                                                
                                                
-- stdout --
	multinode-814696
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-814696-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-814696-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0809 19:00:13.257227  920826 out.go:296] Setting OutFile to fd 1 ...
	I0809 19:00:13.257393  920826 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 19:00:13.257402  920826 out.go:309] Setting ErrFile to fd 2...
	I0809 19:00:13.257406  920826 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 19:00:13.257617  920826 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17011-816603/.minikube/bin
	I0809 19:00:13.257783  920826 out.go:303] Setting JSON to false
	I0809 19:00:13.257824  920826 mustload.go:65] Loading cluster: multinode-814696
	I0809 19:00:13.257917  920826 notify.go:220] Checking for updates...
	I0809 19:00:13.258302  920826 config.go:182] Loaded profile config "multinode-814696": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0809 19:00:13.258319  920826 status.go:255] checking status of multinode-814696 ...
	I0809 19:00:13.258709  920826 cli_runner.go:164] Run: docker container inspect multinode-814696 --format={{.State.Status}}
	I0809 19:00:13.275299  920826 status.go:330] multinode-814696 host status = "Running" (err=<nil>)
	I0809 19:00:13.275331  920826 host.go:66] Checking if "multinode-814696" exists ...
	I0809 19:00:13.275603  920826 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-814696
	I0809 19:00:13.292911  920826 host.go:66] Checking if "multinode-814696" exists ...
	I0809 19:00:13.293181  920826 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0809 19:00:13.293246  920826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-814696
	I0809 19:00:13.309289  920826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33482 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/multinode-814696/id_rsa Username:docker}
	I0809 19:00:13.400537  920826 ssh_runner.go:195] Run: systemctl --version
	I0809 19:00:13.404374  920826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0809 19:00:13.414377  920826 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0809 19:00:13.469505  920826 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:56 SystemTime:2023-08-09 19:00:13.461190619 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1038-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0809 19:00:13.470070  920826 kubeconfig.go:92] found "multinode-814696" server: "https://192.168.58.2:8443"
	I0809 19:00:13.470095  920826 api_server.go:166] Checking apiserver status ...
	I0809 19:00:13.470146  920826 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0809 19:00:13.480311  920826 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1442/cgroup
	I0809 19:00:13.488914  920826 api_server.go:182] apiserver freezer: "10:freezer:/docker/8ea453b976d14165f5dce4299673a2a7dce0fde1a3e93bb5272c76245cac0de7/crio/crio-f9ce1d33945a4a5f64f8b7193bbd66087605cc89e5f55ea9a262ae5ff752284b"
	I0809 19:00:13.488986  920826 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8ea453b976d14165f5dce4299673a2a7dce0fde1a3e93bb5272c76245cac0de7/crio/crio-f9ce1d33945a4a5f64f8b7193bbd66087605cc89e5f55ea9a262ae5ff752284b/freezer.state
	I0809 19:00:13.496449  920826 api_server.go:204] freezer state: "THAWED"
	I0809 19:00:13.496481  920826 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0809 19:00:13.501664  920826 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0809 19:00:13.501691  920826 status.go:421] multinode-814696 apiserver status = Running (err=<nil>)
	I0809 19:00:13.501704  920826 status.go:257] multinode-814696 status: &{Name:multinode-814696 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0809 19:00:13.501734  920826 status.go:255] checking status of multinode-814696-m02 ...
	I0809 19:00:13.501977  920826 cli_runner.go:164] Run: docker container inspect multinode-814696-m02 --format={{.State.Status}}
	I0809 19:00:13.518711  920826 status.go:330] multinode-814696-m02 host status = "Running" (err=<nil>)
	I0809 19:00:13.518735  920826 host.go:66] Checking if "multinode-814696-m02" exists ...
	I0809 19:00:13.518986  920826 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-814696-m02
	I0809 19:00:13.534898  920826 host.go:66] Checking if "multinode-814696-m02" exists ...
	I0809 19:00:13.535145  920826 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0809 19:00:13.535188  920826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-814696-m02
	I0809 19:00:13.551330  920826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33487 SSHKeyPath:/home/jenkins/minikube-integration/17011-816603/.minikube/machines/multinode-814696-m02/id_rsa Username:docker}
	I0809 19:00:13.644482  920826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0809 19:00:13.654793  920826 status.go:257] multinode-814696-m02 status: &{Name:multinode-814696-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0809 19:00:13.654828  920826 status.go:255] checking status of multinode-814696-m03 ...
	I0809 19:00:13.655083  920826 cli_runner.go:164] Run: docker container inspect multinode-814696-m03 --format={{.State.Status}}
	I0809 19:00:13.671477  920826 status.go:330] multinode-814696-m03 host status = "Stopped" (err=<nil>)
	I0809 19:00:13.671498  920826 status.go:343] host is not running, skipping remaining checks
	I0809 19:00:13.671504  920826 status.go:257] multinode-814696-m03 status: &{Name:multinode-814696-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.10s)
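The stderr trace above shows how the status probe decides the apiserver is healthy: pgrep the kube-apiserver PID, map it to its freezer cgroup, confirm the state is THAWED, then hit /healthz. A rough Go sketch of the freezer-state step only, assuming a cgroup v1 layout like the paths in the log (on cgroup v2 hosts there is no freezer controller line, so this simply returns an error):

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// freezerState reads /proc/<pid>/cgroup for the freezer controller entry
// and then returns the contents of that cgroup's freezer.state file,
// e.g. "THAWED" as seen in the log above.
func freezerState(pid int) (string, error) {
	data, err := os.ReadFile(fmt.Sprintf("/proc/%d/cgroup", pid))
	if err != nil {
		return "", err
	}
	m := regexp.MustCompile(`(?m)^\d+:freezer:(.*)$`).FindStringSubmatch(string(data))
	if m == nil {
		return "", fmt.Errorf("no freezer cgroup entry for pid %d", pid)
	}
	state, err := os.ReadFile("/sys/fs/cgroup/freezer" + m[1] + "/freezer.state")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(state)), nil
}

func main() {
	fmt.Println(freezerState(os.Getpid()))
}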

                                                
                                    
TestMultiNode/serial/StartAfterStop (10.63s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814696 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-814696 node start m03 --alsologtostderr: (9.949216366s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814696 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.63s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (115.02s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-814696
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-814696
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-814696: (24.819177772s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-814696 --wait=true -v=8 --alsologtostderr
E0809 19:01:28.240499  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-814696 --wait=true -v=8 --alsologtostderr: (1m30.113547307s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-814696
--- PASS: TestMultiNode/serial/RestartKeepsNodes (115.02s)

                                                
                                    
TestMultiNode/serial/DeleteNode (4.67s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814696 node delete m03
E0809 19:02:22.133516  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/functional-421935/client.crt: no such file or directory
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-814696 node delete m03: (4.079884411s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814696 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.67s)
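The go-template handed to kubectl above walks every node's conditions and prints each Ready status on its own line, which the test then asserts on. Since kubectl evaluates the template over the decoded (lowercase-keyed) JSON response, the same template can be checked locally against a sample document. A small sketch with a hypothetical two-node payload trimmed to the fields the template reads:

package main

import (
	"encoding/json"
	"os"
	"text/template"
)

func main() {
	// Hypothetical `kubectl get nodes -o json` output, reduced to the
	// fields the template actually touches.
	const nodesJSON = `{"items":[
	 {"status":{"conditions":[{"type":"Ready","status":"True"}]}},
	 {"status":{"conditions":[{"type":"Ready","status":"True"}]}}]}`

	var nodes interface{}
	if err := json.Unmarshal([]byte(nodesJSON), &nodes); err != nil {
		panic(err)
	}

	// Same template string the test passes via -o go-template=...
	tmpl := template.Must(template.New("ready").Parse(
		`{{range .items}}{{range .status.conditions}}` +
			`{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`))
	_ = tmpl.Execute(os.Stdout, nodes) // prints " True" once per node
}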

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.95s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814696 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-814696 stop: (23.789327551s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814696 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-814696 status: exit status 7 (82.36346ms)

                                                
                                                
-- stdout --
	multinode-814696
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-814696-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814696 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-814696 status --alsologtostderr: exit status 7 (77.648764ms)

                                                
                                                
-- stdout --
	multinode-814696
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-814696-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0809 19:02:47.905540  930903 out.go:296] Setting OutFile to fd 1 ...
	I0809 19:02:47.905650  930903 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 19:02:47.905658  930903 out.go:309] Setting ErrFile to fd 2...
	I0809 19:02:47.905662  930903 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 19:02:47.905880  930903 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17011-816603/.minikube/bin
	I0809 19:02:47.906047  930903 out.go:303] Setting JSON to false
	I0809 19:02:47.906083  930903 mustload.go:65] Loading cluster: multinode-814696
	I0809 19:02:47.906170  930903 notify.go:220] Checking for updates...
	I0809 19:02:47.906487  930903 config.go:182] Loaded profile config "multinode-814696": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0809 19:02:47.906502  930903 status.go:255] checking status of multinode-814696 ...
	I0809 19:02:47.906944  930903 cli_runner.go:164] Run: docker container inspect multinode-814696 --format={{.State.Status}}
	I0809 19:02:47.923819  930903 status.go:330] multinode-814696 host status = "Stopped" (err=<nil>)
	I0809 19:02:47.923846  930903 status.go:343] host is not running, skipping remaining checks
	I0809 19:02:47.923851  930903 status.go:257] multinode-814696 status: &{Name:multinode-814696 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0809 19:02:47.923895  930903 status.go:255] checking status of multinode-814696-m02 ...
	I0809 19:02:47.924136  930903 cli_runner.go:164] Run: docker container inspect multinode-814696-m02 --format={{.State.Status}}
	I0809 19:02:47.941383  930903 status.go:330] multinode-814696-m02 host status = "Stopped" (err=<nil>)
	I0809 19:02:47.941436  930903 status.go:343] host is not running, skipping remaining checks
	I0809 19:02:47.941443  930903 status.go:257] multinode-814696-m02 status: &{Name:multinode-814696-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.95s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (72.37s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-814696 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0809 19:02:51.288534  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-814696 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m11.775558958s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-814696 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (72.37s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (25.94s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-814696
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-814696-m02 --driver=docker  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-814696-m02 --driver=docker  --container-runtime=crio: exit status 14 (63.319168ms)

                                                
                                                
-- stdout --
	* [multinode-814696-m02] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17011
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17011-816603/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17011-816603/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-814696-m02' is duplicated with machine name 'multinode-814696-m02' in profile 'multinode-814696'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-814696-m03 --driver=docker  --container-runtime=crio
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-814696-m03 --driver=docker  --container-runtime=crio: (23.751623757s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-814696
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-814696: exit status 80 (265.96243ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-814696
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-814696-m03 already exists in multinode-814696-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-814696-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-814696-m03: (1.821580834s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (25.94s)

                                                
                                    
TestPreload (125.33s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-539611 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0809 19:04:44.723091  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-539611 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m12.36124281s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-539611 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-539611 image pull gcr.io/k8s-minikube/busybox: (1.011760291s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-539611
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-539611: (5.655274431s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-539611 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0809 19:06:28.240774  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-539611 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (43.766844717s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-539611 image list
helpers_test.go:175: Cleaning up "test-preload-539611" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-539611
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-539611: (2.315484906s)
--- PASS: TestPreload (125.33s)

                                                
                                    
TestScheduledStopUnix (97.65s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-646903 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-646903 --memory=2048 --driver=docker  --container-runtime=crio: (22.192723904s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-646903 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-646903 -n scheduled-stop-646903
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-646903 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-646903 --cancel-scheduled
E0809 19:07:22.134289  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/functional-421935/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-646903 -n scheduled-stop-646903
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-646903
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-646903 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-646903
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-646903: exit status 7 (64.985574ms)

                                                
                                                
-- stdout --
	scheduled-stop-646903
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-646903 -n scheduled-stop-646903
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-646903 -n scheduled-stop-646903: exit status 7 (62.048225ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-646903" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-646903
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-646903: (4.19461438s)
--- PASS: TestScheduledStopUnix (97.65s)
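The sequence above exercises the whole scheduled-stop lifecycle: arm a 5m stop, re-arm it at 15s, cancel, re-arm, and finally observe the host reach Stopped. The "os: process already finished" signal checks suggest minikube daemonizes a helper process for this; the schedule/cancel shape itself reduces to a cancellable timer. A toy Go illustration of that shape only, explicitly not minikube's implementation:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Arm a stop 15 seconds out, like `minikube stop --schedule 15s`.
	stop := time.AfterFunc(15*time.Second, func() {
		fmt.Println("stopping cluster now")
	})
	// `--cancel-scheduled` corresponds to disarming the timer before it fires.
	if stop.Stop() {
		fmt.Println("scheduled stop cancelled")
	}
}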

                                                
                                    
TestInsufficientStorage (12.85s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-829052 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-829052 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.531457226s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"0d9a09e0-b683-46a1-807b-a191c4c6cea5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-829052] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a8758803-a3f2-4ded-ac73-5a3e99f70ca6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17011"}}
	{"specversion":"1.0","id":"c71e2d21-b655-4ee1-be06-b064f90b2012","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"30aada2b-edbd-48b2-9fa6-949a4a8dffd3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17011-816603/kubeconfig"}}
	{"specversion":"1.0","id":"dd36849e-0a21-48d4-81be-79a78fb03e88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17011-816603/.minikube"}}
	{"specversion":"1.0","id":"ebe59324-593b-4198-948f-a213865ad00a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"565d89e3-9163-4d78-85b7-c15b256aeee6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a012ac94-99c2-4b16-8cde-d37e61a776f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"88f0c37f-9492-4c18-bb29-d478ac186296","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"6d9c6b04-aba4-42ab-84dd-dc5ae73d82bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"592d1d0a-1f3f-48d9-baaf-326febee465f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"45f93e80-1b0b-46f6-9aa4-e49bd768e3b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-829052 in cluster insufficient-storage-829052","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"a56e5e0c-d768-4d57-927e-f87f9f0658e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"a6ba004c-5780-4f49-ade9-478e7bcc3ff7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"eac158cf-7806-458e-902d-e41fff326a2c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-829052 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-829052 --output=json --layout=cluster: exit status 7 (257.375581ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-829052","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.31.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-829052","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0809 19:08:25.644460  952283 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-829052" does not appear in /home/jenkins/minikube-integration/17011-816603/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-829052 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-829052 --output=json --layout=cluster: exit status 7 (260.955215ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-829052","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.31.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-829052","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0809 19:08:25.906260  952370 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-829052" does not appear in /home/jenkins/minikube-integration/17011-816603/kubeconfig
	E0809 19:08:25.915565  952370 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/insufficient-storage-829052/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-829052" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-829052
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-829052: (1.803121107s)
--- PASS: TestInsufficientStorage (12.85s)
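With --output=json, every line minikube emits is a self-contained CloudEvents-style object, as seen in the stdout block above; the final io.k8s.sigs.minikube.error event carries the exit code and remediation advice. A hedged Go sketch for consuming that stream, with a struct covering only the fields visible in this log (the real schema may carry more):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event models just the fields shown in the log above.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// e.g. minikube start --output=json ... | this program
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // tolerate any non-JSON noise in the stream
		}
		if e.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit %s): %s\n",
				e.Data["name"], e.Data["exitcode"], e.Data["message"])
		}
	}
}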

                                                
                                    
TestKubernetesUpgrade (357.07s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-222913 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-222913 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (54.351365934s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-222913
version_upgrade_test.go:239: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-222913: (2.35461885s)
version_upgrade_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-222913 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-222913 status --format={{.Host}}: exit status 7 (76.519457ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-222913 --memory=2200 --kubernetes-version=v1.28.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0809 19:11:07.772933  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/client.crt: no such file or directory
version_upgrade_test.go:255: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-222913 --memory=2200 --kubernetes-version=v1.28.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m34.987920977s)
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-222913 version --output=json
version_upgrade_test.go:279: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:281: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-222913 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:281: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-222913 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (86.170684ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-222913] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17011
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17011-816603/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17011-816603/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.0-rc.0 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-222913
	    minikube start -p kubernetes-upgrade-222913 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2229132 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.0-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-222913 --kubernetes-version=v1.28.0-rc.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:285: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:287: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-222913 --memory=2200 --kubernetes-version=v1.28.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:287: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-222913 --memory=2200 --kubernetes-version=v1.28.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.888390221s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-222913" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-222913
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-222913: (2.243776718s)
--- PASS: TestKubernetesUpgrade (357.07s)
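
Condensed, the flow this test drives is the following (profile name illustrative; versions as used in the run):

    minikube start  -p k8s-upgrade --kubernetes-version=v1.16.0      --driver=docker --container-runtime=crio
    minikube stop   -p k8s-upgrade
    minikube start  -p k8s-upgrade --kubernetes-version=v1.28.0-rc.0 --driver=docker --container-runtime=crio
    # an in-place downgrade is refused (exit 106, K8S_DOWNGRADE_UNSUPPORTED); recreate instead:
    minikube delete -p k8s-upgrade
    minikube start  -p k8s-upgrade --kubernetes-version=v1.16.0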

                                                
                                    
x
+
TestMissingContainerUpgrade (149.72s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:321: (dbg) Run:  /tmp/minikube-v1.9.0.2616701382.exe start -p missing-upgrade-980585 --memory=2200 --driver=docker  --container-runtime=crio
E0809 19:08:45.182272  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/functional-421935/client.crt: no such file or directory
version_upgrade_test.go:321: (dbg) Done: /tmp/minikube-v1.9.0.2616701382.exe start -p missing-upgrade-980585 --memory=2200 --driver=docker  --container-runtime=crio: (1m22.735869881s)
version_upgrade_test.go:330: (dbg) Run:  docker stop missing-upgrade-980585
version_upgrade_test.go:335: (dbg) Run:  docker rm missing-upgrade-980585
version_upgrade_test.go:341: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-980585 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:341: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-980585 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m4.013303067s)
helpers_test.go:175: Cleaning up "missing-upgrade-980585" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-980585
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-980585: (2.082982456s)
--- PASS: TestMissingContainerUpgrade (149.72s)
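
The scenario in sketch form: an older release creates the cluster, the backing container is removed behind minikube's back, and the current binary has to detect the missing container and rebuild it. Profile name is illustrative, and minikube-v1.9.0 stands in for the downloaded v1.9.0 binary the test actually invokes:

    minikube-v1.9.0 start -p missing-upgrade --driver=docker --container-runtime=crio   # older binary
    docker stop missing-upgrade && docker rm missing-upgrade                            # container vanishes
    minikube start -p missing-upgrade --driver=docker --container-runtime=crio          # current binary recreates it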

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-992843 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-992843 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (79.659081ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-992843] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17011
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17011-816603/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17011-816603/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
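
The rejected and accepted invocations side by side (sketch; profile name illustrative). The unset command is the remedy minikube itself suggests in the stderr above:

    minikube start -p nok8s --no-kubernetes --kubernetes-version=1.20   # exit 14: flags conflict
    minikube config unset kubernetes-version                            # drop any globally pinned version
    minikube start -p nok8s --no-kubernetes                             # accepted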

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (39.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-992843 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-992843 --driver=docker  --container-runtime=crio: (38.998252444s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-992843 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.31s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (14.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-992843 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-992843 --no-kubernetes --driver=docker  --container-runtime=crio: (11.840086389s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-992843 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-992843 status -o json: exit status 2 (451.5567ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-992843","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-992843
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-992843: (2.050711894s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (14.34s)
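
For scripting the same assertion: with Kubernetes stopped, the status command exits 2 and the JSON carries the per-component states (jq assumed available):

    out/minikube-linux-amd64 -p NoKubernetes-992843 status -o json > status.json; echo $?   # 2
    jq -r '.Host, .Kubelet, .APIServer' status.json                                         # Running / Stopped / Stopped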

                                                
                                    
x
+
TestNoKubernetes/serial/Start (8.51s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-992843 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-992843 --no-kubernetes --driver=docker  --container-runtime=crio: (8.505527293s)
--- PASS: TestNoKubernetes/serial/Start (8.51s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-992843 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-992843 "sudo systemctl is-active --quiet service kubelet": exit status 1 (297.003026ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.30s)
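
The probe leans on systemctl's exit-code convention: "is-active --quiet" prints nothing and exits 3 when the unit is inactive, which minikube ssh surfaces as the status-3 message above while itself exiting non-zero. A minimal sketch:

    out/minikube-linux-amd64 ssh -p NoKubernetes-992843 \
      "sudo systemctl is-active --quiet service kubelet" || echo "kubelet inactive"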

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (4.46s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (3.768983173s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (4.46s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-992843
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-992843: (1.255298208s)
--- PASS: TestNoKubernetes/serial/Stop (1.26s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (7.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-992843 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-992843 --driver=docker  --container-runtime=crio: (7.365155034s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.37s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-992843 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-992843 "sudo systemctl is-active --quiet service kubelet": exit status 1 (302.230141ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-393336 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-393336 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (188.989302ms)

                                                
                                                
-- stdout --
	* [false-393336] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17011
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17011-816603/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17011-816603/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0809 19:09:50.452441  979337 out.go:296] Setting OutFile to fd 1 ...
	I0809 19:09:50.452591  979337 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 19:09:50.452603  979337 out.go:309] Setting ErrFile to fd 2...
	I0809 19:09:50.452610  979337 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0809 19:09:50.452840  979337 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17011-816603/.minikube/bin
	I0809 19:09:50.453423  979337 out.go:303] Setting JSON to false
	I0809 19:09:50.454994  979337 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":10346,"bootTime":1691597845,"procs":485,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1038-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0809 19:09:50.455119  979337 start.go:138] virtualization: kvm guest
	I0809 19:09:50.457475  979337 out.go:177] * [false-393336] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)
	I0809 19:09:50.459586  979337 out.go:177]   - MINIKUBE_LOCATION=17011
	I0809 19:09:50.459634  979337 notify.go:220] Checking for updates...
	I0809 19:09:50.461468  979337 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0809 19:09:50.463344  979337 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17011-816603/kubeconfig
	I0809 19:09:50.465095  979337 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17011-816603/.minikube
	I0809 19:09:50.467783  979337 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0809 19:09:50.469352  979337 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0809 19:09:50.474423  979337 config.go:182] Loaded profile config "cert-expiration-023346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0809 19:09:50.474536  979337 config.go:182] Loaded profile config "cert-options-425713": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0809 19:09:50.474604  979337 config.go:182] Loaded profile config "missing-upgrade-980585": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0809 19:09:50.474694  979337 driver.go:373] Setting default libvirt URI to qemu:///system
	I0809 19:09:50.507794  979337 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0809 19:09:50.507921  979337 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0809 19:09:50.581518  979337 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:82 OomKillDisable:true NGoroutines:93 SystemTime:2023-08-09 19:09:50.565159037 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1038-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0809 19:09:50.581620  979337 docker.go:294] overlay module found
	I0809 19:09:50.583634  979337 out.go:177] * Using the docker driver based on user configuration
	I0809 19:09:50.585172  979337 start.go:298] selected driver: docker
	I0809 19:09:50.585200  979337 start.go:901] validating driver "docker" against <nil>
	I0809 19:09:50.585218  979337 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0809 19:09:50.589036  979337 out.go:177] 
	W0809 19:09:50.590407  979337 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0809 19:09:50.591776  979337 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-393336 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-393336

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-393336

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-393336

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-393336

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-393336

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-393336

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-393336

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-393336

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-393336

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-393336

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393336"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393336"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393336"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-393336

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393336"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393336"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-393336" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-393336" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-393336" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-393336" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-393336" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-393336" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-393336" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-393336" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393336"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393336"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393336"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393336"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393336"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-393336" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-393336" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-393336" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393336"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393336"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393336"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393336"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393336"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17011-816603/.minikube/ca.crt
    server: https://127.0.0.1:33563
  name: missing-upgrade-980585
contexts:
- context:
    cluster: missing-upgrade-980585
    user: missing-upgrade-980585
  name: missing-upgrade-980585
current-context: ""
kind: Config
preferences: {}
users:
- name: missing-upgrade-980585
  user:
    client-certificate: /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/missing-upgrade-980585/client.crt
    client-key: /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/missing-upgrade-980585/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-393336

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393336"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393336"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393336"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393336"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393336"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393336"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393336"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393336"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393336"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393336"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393336"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393336"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393336"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393336"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393336"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393336"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393336"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-393336"

                                                
                                                
----------------------- debugLogs end: false-393336 [took: 3.500549903s] --------------------------------
helpers_test.go:175: Cleaning up "false-393336" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-393336
--- PASS: TestNetworkPlugins/group/false (3.83s)
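
This group fails fast by design: CRI-O ships no built-in pod network, so minikube refuses --cni=false with it (exit 14 above). For contrast, a sketch of the CNI selections that the later groups in this report start successfully (profile name illustrative):

    minikube start -p demo --container-runtime=crio --cni=false                       # rejected: crio requires CNI
    minikube start -p demo --container-runtime=crio --cni=kindnet                     # named plugin
    minikube start -p demo --container-runtime=crio --cni=calico                      # named plugin
    minikube start -p demo --container-runtime=crio --cni=testdata/kube-flannel.yaml  # custom manifest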

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.4s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.40s)

                                                
                                    
x
+
TestPause/serial/Start (76.16s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-734678 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E0809 19:11:28.240906  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-734678 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m16.156668228s)
--- PASS: TestPause/serial/Start (76.16s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.52s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-321125
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.52s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (69.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-393336 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0809 19:12:22.134192  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/functional-421935/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-393336 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m9.082600169s)
--- PASS: TestNetworkPlugins/group/auto/Start (69.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-393336 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (9.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-393336 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-vwqvd" [094e3078-49eb-4aeb-b991-cf1ac7420d42] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-vwqvd" [094e3078-49eb-4aeb-b991-cf1ac7420d42] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.009850723s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-393336 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-393336 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-393336 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
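
The three probes above follow one pattern, repeated for every network-plugin group below: resolve a service name through the cluster DNS, then check TCP reachability to localhost and back to the pod's own service (hairpin). Collected here for the auto profile:

    kubectl --context auto-393336 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-393336 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-393336 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"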

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (74.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-393336 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-393336 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m14.58061451s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (74.58s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (63.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-393336 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-393336 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m3.293525615s)
--- PASS: TestNetworkPlugins/group/calico/Start (63.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (56.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-393336 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-393336 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (56.006885112s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (56.01s)
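
Unlike the named kindnet/calico plugins, this group hands --cni a manifest path, so an arbitrary CNI YAML is applied at start. In sketch form (path as used in the run):

    minikube start -p custom-flannel-393336 --cni=testdata/kube-flannel.yaml \
      --driver=docker --container-runtime=crio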

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-393336 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-393336 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-7cntp" [03e35d3d-03e2-40e5-8ad5-2ea0ce016f5f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-7cntp" [03e35d3d-03e2-40e5-8ad5-2ea0ce016f5f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.009102922s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-fpzvk" [4f27cbe5-2789-42d6-9796-0f119f2bfb30] Running
E0809 19:14:44.723759  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.023224784s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-6tdxh" [5a31c433-bb9e-43d1-a590-61fe15d76e2d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.021219487s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-393336 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (9.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-393336 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-5xt5n" [8df11388-70f9-4536-a73d-5f5d7aa527a0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-5xt5n" [8df11388-70f9-4536-a73d-5f5d7aa527a0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.009529822s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-393336 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-393336 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-393336 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-393336 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (10.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-393336 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-xw8z6" [caf0afdf-07df-4536-b681-a09b56bbf7f5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-xw8z6" [caf0afdf-07df-4536-b681-a09b56bbf7f5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.008742176s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.34s)

TestNetworkPlugins/group/kindnet/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-393336 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

TestNetworkPlugins/group/kindnet/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-393336 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

TestNetworkPlugins/group/kindnet/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-393336 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

TestNetworkPlugins/group/calico/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-393336 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

TestNetworkPlugins/group/calico/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-393336 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

TestNetworkPlugins/group/calico/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-393336 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

TestNetworkPlugins/group/enable-default-cni/Start (82.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-393336 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-393336 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m22.171611446s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (82.17s)

TestNetworkPlugins/group/flannel/Start (58.52s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-393336 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-393336 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (58.517922407s)
--- PASS: TestNetworkPlugins/group/flannel/Start (58.52s)

TestNetworkPlugins/group/bridge/Start (40.53s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-393336 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-393336 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (40.52609019s)
--- PASS: TestNetworkPlugins/group/bridge/Start (40.53s)

TestStartStop/group/old-k8s-version/serial/FirstStart (134s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-604959 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-604959 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m13.99738471s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (134.00s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-393336 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/NetCatPod (11.39s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-393336 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-zbhd8" [6877ceb7-4ac6-4a60-8d26-bfd78f2fb5d1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-zbhd8" [6877ceb7-4ac6-4a60-8d26-bfd78f2fb5d1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.010691758s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.39s)

TestNetworkPlugins/group/bridge/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-393336 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-dwppd" [cebffe93-1bc3-4a2b-ba90-beebf1af1013] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.019604869s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-393336 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-393336 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-393336 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/flannel/NetCatPod (11.32s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-393336 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-rwnbf" [7cdb579e-74ce-42c8-aa76-4f303abf2825] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0809 19:16:28.240386  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/client.crt: no such file or directory
helpers_test.go:344: "netcat-7458db8b8-rwnbf" [7cdb579e-74ce-42c8-aa76-4f303abf2825] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.009671022s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.32s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-393336 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.35s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-393336 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-hkrxc" [7f911136-f8f0-434a-a4b7-1fd2d59ba29a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-hkrxc" [7f911136-f8f0-434a-a4b7-1fd2d59ba29a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.008769299s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.35s)

TestNetworkPlugins/group/flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-393336 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

TestNetworkPlugins/group/flannel/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-393336 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

TestNetworkPlugins/group/flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-393336 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

TestStartStop/group/no-preload/serial/FirstStart (61.53s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-204240 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0-rc.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-204240 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0-rc.0: (1m1.531088634s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (61.53s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-393336 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-393336 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-393336 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

TestStartStop/group/embed-certs/serial/FirstStart (73.33s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-563480 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-563480 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.4: (1m13.333739461s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (73.33s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (71.67s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-612475 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.4
E0809 19:17:22.133880  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/functional-421935/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-612475 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.4: (1m11.673492234s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (71.67s)

TestStartStop/group/no-preload/serial/DeployApp (8.41s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-204240 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4d08eb7f-c788-4771-aa0f-0665c3da8ac1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4d08eb7f-c788-4771-aa0f-0665c3da8ac1] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.016790592s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-204240 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.41s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.02s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-204240 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-204240 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.02s)

TestStartStop/group/no-preload/serial/Stop (11.92s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-204240 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-204240 --alsologtostderr -v=3: (11.923704603s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.92s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.16s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-204240 -n no-preload-204240
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-204240 -n no-preload-204240: exit status 7 (65.257863ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-204240 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.16s)

TestStartStop/group/no-preload/serial/SecondStart (340.82s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-204240 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0-rc.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-204240 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0-rc.0: (5m40.487916246s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-204240 -n no-preload-204240
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (340.82s)

TestStartStop/group/embed-certs/serial/DeployApp (8.4s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-563480 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [cc9bd970-e7d6-4b83-846c-9a2685cb7cb4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [cc9bd970-e7d6-4b83-846c-9a2685cb7cb4] Running
E0809 19:18:14.169059  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/auto-393336/client.crt: no such file or directory
E0809 19:18:14.174372  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/auto-393336/client.crt: no such file or directory
E0809 19:18:14.185102  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/auto-393336/client.crt: no such file or directory
E0809 19:18:14.206196  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/auto-393336/client.crt: no such file or directory
E0809 19:18:14.246543  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/auto-393336/client.crt: no such file or directory
E0809 19:18:14.327532  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/auto-393336/client.crt: no such file or directory
E0809 19:18:14.487972  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/auto-393336/client.crt: no such file or directory
E0809 19:18:14.808313  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/auto-393336/client.crt: no such file or directory
E0809 19:18:15.448872  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/auto-393336/client.crt: no such file or directory
E0809 19:18:16.730011  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/auto-393336/client.crt: no such file or directory
E0809 19:18:19.290767  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/auto-393336/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.017476008s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-563480 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.40s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.1s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-563480 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-563480 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.025240409s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-563480 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.10s)

TestStartStop/group/embed-certs/serial/Stop (12.01s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-563480 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-563480 --alsologtostderr -v=3: (12.008377103s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.01s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-604959 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [be7d2e32-a8c2-4485-ade2-6746ced6664f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [be7d2e32-a8c2-4485-ade2-6746ced6664f] Running
E0809 19:18:24.411041  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/auto-393336/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.014130439s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-604959 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.39s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.53s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-612475 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c93b3f6d-e881-4b6a-96f1-044283fb0cc8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c93b3f6d-e881-4b6a-96f1-044283fb0cc8] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.018702953s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-612475 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.53s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.73s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-604959 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-604959 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.73s)

TestStartStop/group/old-k8s-version/serial/Stop (12.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-604959 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-604959 --alsologtostderr -v=3: (12.011621755s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.01s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-612475 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-612475 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.00s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-612475 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-612475 --alsologtostderr -v=3: (12.05198985s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.05s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-563480 -n embed-certs-563480
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-563480 -n embed-certs-563480: exit status 7 (102.008094ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-563480 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/embed-certs/serial/SecondStart (343.01s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-563480 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.4
E0809 19:18:34.651507  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/auto-393336/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-563480 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.4: (5m42.678911997s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-563480 -n embed-certs-563480
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (343.01s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-604959 -n old-k8s-version-604959
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-604959 -n old-k8s-version-604959: exit status 7 (62.503779ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-604959 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/old-k8s-version/serial/SecondStart (430.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-604959 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-604959 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m9.743706351s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-604959 -n old-k8s-version-604959
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (430.04s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-612475 -n default-k8s-diff-port-612475
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-612475 -n default-k8s-diff-port-612475: exit status 7 (67.571025ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-612475 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (342.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-612475 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.4
E0809 19:18:55.132483  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/auto-393336/client.crt: no such file or directory
E0809 19:19:31.289734  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/client.crt: no such file or directory
E0809 19:19:36.092834  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/auto-393336/client.crt: no such file or directory
E0809 19:19:40.715838  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/custom-flannel-393336/client.crt: no such file or directory
E0809 19:19:40.721130  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/custom-flannel-393336/client.crt: no such file or directory
E0809 19:19:40.731404  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/custom-flannel-393336/client.crt: no such file or directory
E0809 19:19:40.751771  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/custom-flannel-393336/client.crt: no such file or directory
E0809 19:19:40.792101  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/custom-flannel-393336/client.crt: no such file or directory
E0809 19:19:40.872433  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/custom-flannel-393336/client.crt: no such file or directory
E0809 19:19:41.032902  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/custom-flannel-393336/client.crt: no such file or directory
E0809 19:19:41.353401  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/custom-flannel-393336/client.crt: no such file or directory
E0809 19:19:41.993738  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/custom-flannel-393336/client.crt: no such file or directory
E0809 19:19:43.274151  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/custom-flannel-393336/client.crt: no such file or directory
E0809 19:19:43.954768  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/kindnet-393336/client.crt: no such file or directory
E0809 19:19:43.960021  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/kindnet-393336/client.crt: no such file or directory
E0809 19:19:43.970290  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/kindnet-393336/client.crt: no such file or directory
E0809 19:19:43.990573  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/kindnet-393336/client.crt: no such file or directory
E0809 19:19:44.030882  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/kindnet-393336/client.crt: no such file or directory
E0809 19:19:44.111237  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/kindnet-393336/client.crt: no such file or directory
E0809 19:19:44.271680  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/kindnet-393336/client.crt: no such file or directory
E0809 19:19:44.592008  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/kindnet-393336/client.crt: no such file or directory
E0809 19:19:44.723475  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/client.crt: no such file or directory
E0809 19:19:45.232857  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/kindnet-393336/client.crt: no such file or directory
E0809 19:19:45.834440  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/custom-flannel-393336/client.crt: no such file or directory
E0809 19:19:46.513289  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/kindnet-393336/client.crt: no such file or directory
E0809 19:19:47.366416  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/calico-393336/client.crt: no such file or directory
E0809 19:19:47.371687  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/calico-393336/client.crt: no such file or directory
E0809 19:19:47.381981  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/calico-393336/client.crt: no such file or directory
E0809 19:19:47.402326  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/calico-393336/client.crt: no such file or directory
E0809 19:19:47.442636  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/calico-393336/client.crt: no such file or directory
E0809 19:19:47.522969  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/calico-393336/client.crt: no such file or directory
E0809 19:19:47.683394  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/calico-393336/client.crt: no such file or directory
E0809 19:19:48.004223  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/calico-393336/client.crt: no such file or directory
E0809 19:19:48.645277  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/calico-393336/client.crt: no such file or directory
E0809 19:19:49.073448  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/kindnet-393336/client.crt: no such file or directory
E0809 19:19:49.925892  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/calico-393336/client.crt: no such file or directory
E0809 19:19:50.955495  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/custom-flannel-393336/client.crt: no such file or directory
E0809 19:19:52.486867  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/calico-393336/client.crt: no such file or directory
E0809 19:19:54.194555  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/kindnet-393336/client.crt: no such file or directory
E0809 19:19:57.607521  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/calico-393336/client.crt: no such file or directory
E0809 19:20:01.195987  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/custom-flannel-393336/client.crt: no such file or directory
E0809 19:20:04.434703  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/kindnet-393336/client.crt: no such file or directory
E0809 19:20:07.848522  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/calico-393336/client.crt: no such file or directory
E0809 19:20:21.676851  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/custom-flannel-393336/client.crt: no such file or directory
E0809 19:20:24.915766  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/kindnet-393336/client.crt: no such file or directory
E0809 19:20:28.328958  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/calico-393336/client.crt: no such file or directory
E0809 19:20:58.013790  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/auto-393336/client.crt: no such file or directory
E0809 19:21:02.637016  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/custom-flannel-393336/client.crt: no such file or directory
E0809 19:21:05.876477  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/kindnet-393336/client.crt: no such file or directory
E0809 19:21:07.829105  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/bridge-393336/client.crt: no such file or directory
E0809 19:21:07.835268  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/bridge-393336/client.crt: no such file or directory
E0809 19:21:07.845526  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/bridge-393336/client.crt: no such file or directory
E0809 19:21:07.865791  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/bridge-393336/client.crt: no such file or directory
E0809 19:21:07.906090  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/bridge-393336/client.crt: no such file or directory
E0809 19:21:07.986402  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/bridge-393336/client.crt: no such file or directory
E0809 19:21:08.146633  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/bridge-393336/client.crt: no such file or directory
E0809 19:21:08.467091  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/bridge-393336/client.crt: no such file or directory
E0809 19:21:09.107607  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/bridge-393336/client.crt: no such file or directory
E0809 19:21:09.289996  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/calico-393336/client.crt: no such file or directory
E0809 19:21:10.388456  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/bridge-393336/client.crt: no such file or directory
E0809 19:21:12.948901  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/bridge-393336/client.crt: no such file or directory
E0809 19:21:18.069602  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/bridge-393336/client.crt: no such file or directory
E0809 19:21:18.959795  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/flannel-393336/client.crt: no such file or directory
E0809 19:21:18.965067  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/flannel-393336/client.crt: no such file or directory
E0809 19:21:18.975320  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/flannel-393336/client.crt: no such file or directory
E0809 19:21:18.995596  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/flannel-393336/client.crt: no such file or directory
E0809 19:21:19.035937  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/flannel-393336/client.crt: no such file or directory
E0809 19:21:19.116282  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/flannel-393336/client.crt: no such file or directory
E0809 19:21:19.276667  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/flannel-393336/client.crt: no such file or directory
E0809 19:21:19.597223  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/flannel-393336/client.crt: no such file or directory
E0809 19:21:20.237800  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/flannel-393336/client.crt: no such file or directory
E0809 19:21:21.518105  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/flannel-393336/client.crt: no such file or directory
E0809 19:21:24.079070  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/flannel-393336/client.crt: no such file or directory
E0809 19:21:28.240393  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/addons-922218/client.crt: no such file or directory
E0809 19:21:28.310695  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/bridge-393336/client.crt: no such file or directory
E0809 19:21:29.200031  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/flannel-393336/client.crt: no such file or directory
E0809 19:21:34.719168  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/enable-default-cni-393336/client.crt: no such file or directory
E0809 19:21:34.724389  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/enable-default-cni-393336/client.crt: no such file or directory
E0809 19:21:34.734642  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/enable-default-cni-393336/client.crt: no such file or directory
E0809 19:21:34.754920  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/enable-default-cni-393336/client.crt: no such file or directory
E0809 19:21:34.795181  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/enable-default-cni-393336/client.crt: no such file or directory
E0809 19:21:34.875569  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/enable-default-cni-393336/client.crt: no such file or directory
E0809 19:21:35.035986  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/enable-default-cni-393336/client.crt: no such file or directory
E0809 19:21:35.356820  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/enable-default-cni-393336/client.crt: no such file or directory
E0809 19:21:35.996970  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/enable-default-cni-393336/client.crt: no such file or directory
E0809 19:21:37.277741  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/enable-default-cni-393336/client.crt: no such file or directory
E0809 19:21:39.440878  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/flannel-393336/client.crt: no such file or directory
E0809 19:21:39.838344  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/enable-default-cni-393336/client.crt: no such file or directory
E0809 19:21:44.958869  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/enable-default-cni-393336/client.crt: no such file or directory
E0809 19:21:48.791417  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/bridge-393336/client.crt: no such file or directory
E0809 19:21:55.199249  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/enable-default-cni-393336/client.crt: no such file or directory
E0809 19:21:59.921988  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/flannel-393336/client.crt: no such file or directory
E0809 19:22:15.680135  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/enable-default-cni-393336/client.crt: no such file or directory
E0809 19:22:22.133517  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/functional-421935/client.crt: no such file or directory
E0809 19:22:24.557641  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/custom-flannel-393336/client.crt: no such file or directory
E0809 19:22:27.797102  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/kindnet-393336/client.crt: no such file or directory
E0809 19:22:29.751794  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/bridge-393336/client.crt: no such file or directory
E0809 19:22:31.211197  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/calico-393336/client.crt: no such file or directory
E0809 19:22:40.882255  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/flannel-393336/client.crt: no such file or directory
E0809 19:22:56.640670  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/enable-default-cni-393336/client.crt: no such file or directory
E0809 19:23:14.169470  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/auto-393336/client.crt: no such file or directory
E0809 19:23:41.854756  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/auto-393336/client.crt: no such file or directory
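
The run of cert_rotation errors above is noise from earlier tests rather than a failure in this one: client-go keeps a certificate-reload watcher for every profile the test binary has touched, and the network-plugin profiles (bridge-393336, flannel-393336, enable-default-cni-393336, and so on) were deleted once their suites finished, so each reload attempt fails with "no such file or directory". A quick way to see the mismatch by hand, as a sketch against the report's own directory layout:

    # Certs still on disk at this point in the run:
    ls /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/*/client.crt
    # Profiles named in the errors (e.g. flannel-393336) no longer appear here,
    # which is exactly why the watcher's open() fails.
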
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-612475 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.4: (5m41.696211232s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-612475 -n default-k8s-diff-port-612475
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (342.11s)
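
The default-k8s-diff-port start that just completed exercises a non-default API server port: --apiserver-port=8444 makes the apiserver listen on 8444 inside the node instead of the usual 8443. A rough by-hand check of the same thing, as a sketch (with the docker driver the kubeconfig endpoint is a forwarded localhost port, while 8444 is the in-node port; ss being present in the node image is an assumption):

    minikube start -p default-k8s-diff-port-612475 --driver=docker --container-runtime=crio --apiserver-port=8444
    # The apiserver should be bound on 8444 inside the node:
    minikube ssh -p default-k8s-diff-port-612475 "sudo ss -tlnp | grep 8444"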

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2h4dc" [ca55739d-e9fe-4eb6-b011-2ba393d4c1a6] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2h4dc" [ca55739d-e9fe-4eb6-b011-2ba393d4c1a6] Running
E0809 19:23:51.673015  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/bridge-393336/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.017914351s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.02s)
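
Each UserAppExistsAfterStop check in this report follows the same pattern: after the second start, the harness waits up to 9 minutes for the dashboard pods to go from Pending to Running. Roughly the same wait can be expressed directly with kubectl, as a sketch using the context name from this log:

    kubectl --context no-preload-204240 -n kubernetes-dashboard wait \
        --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m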

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2h4dc" [ca55739d-e9fe-4eb6-b011-2ba393d4c1a6] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009699811s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-204240 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-204240 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.30s)
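
VerifyKubernetesImages works by dumping the node's image store as JSON and flagging any repo tag outside minikube's expected set; the kindnetd and busybox tags above are reported but tolerated. A manual equivalent of the same dump, as a sketch (the jq filter is mine, not part of the harness):

    minikube ssh -p no-preload-204240 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'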

TestStartStop/group/no-preload/serial/Pause (2.89s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-204240 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-204240 -n no-preload-204240
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-204240 -n no-preload-204240: exit status 2 (308.497688ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-204240 -n no-preload-204240
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-204240 -n no-preload-204240: exit status 2 (310.7545ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-204240 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-204240 -n no-preload-204240
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-204240 -n no-preload-204240
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.89s)
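
The exit status 2 results above are deliberate: minikube status reports a paused apiserver or stopped kubelet through its exit code, and the harness records that as "may be ok". The same sequence by hand, as a sketch (after unpause, status should return to Running with exit code 0):

    minikube pause -p no-preload-204240
    minikube status --format='{{.APIServer}}' -p no-preload-204240   # Paused, exit code 2
    minikube status --format='{{.Kubelet}}' -p no-preload-204240     # Stopped, exit code 2
    minikube unpause -p no-preload-204240
    minikube status --format='{{.APIServer}}' -p no-preload-204240   # Running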

TestStartStop/group/newest-cni/serial/FirstStart (40.29s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-832533 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0-rc.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-832533 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0-rc.0: (40.293234165s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (40.29s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (8.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-4sqfb" [ee515def-d92e-4770-9e3d-42289c34b545] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-4sqfb" [ee515def-d92e-4770-9e3d-42289c34b545] Running
E0809 19:24:18.561006  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/enable-default-cni-393336/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.019479596s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (8.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-4sqfb" [ee515def-d92e-4770-9e3d-42289c34b545] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00946678s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-563480 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-b9cqf" [883b5c97-5230-4bc7-882b-19b2e099d9ba] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-b9cqf" [883b5c97-5230-4bc7-882b-19b2e099d9ba] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.060029736s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.37s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-563480 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.37s)

TestStartStop/group/embed-certs/serial/Pause (3.52s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-563480 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-563480 -n embed-certs-563480
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-563480 -n embed-certs-563480: exit status 2 (364.898887ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-563480 -n embed-certs-563480
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-563480 -n embed-certs-563480: exit status 2 (332.689409ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-563480 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-563480 -n embed-certs-563480
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-563480 -n embed-certs-563480
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.52s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-b9cqf" [883b5c97-5230-4bc7-882b-19b2e099d9ba] Running
E0809 19:24:40.715708  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/custom-flannel-393336/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010088616s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-612475 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-612475 "sudo crictl images -o json"
E0809 19:24:43.955054  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/kindnet-393336/client.crt: no such file or directory
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.91s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-612475 --alsologtostderr -v=1
E0809 19:24:44.723039  823434 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/ingress-addon-legacy-849795/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-612475 -n default-k8s-diff-port-612475
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-612475 -n default-k8s-diff-port-612475: exit status 2 (325.647221ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-612475 -n default-k8s-diff-port-612475
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-612475 -n default-k8s-diff-port-612475: exit status 2 (308.570701ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-612475 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-612475 -n default-k8s-diff-port-612475
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-612475 -n default-k8s-diff-port-612475
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.91s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.9s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-832533 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.90s)

TestStartStop/group/newest-cni/serial/Stop (2.07s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-832533 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-832533 --alsologtostderr -v=3: (2.072809033s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.07s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-832533 -n newest-cni-832533
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-832533 -n newest-cni-832533: exit status 7 (75.815549ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-832533 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)
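
EnableAddonAfterStop shows that addon toggles work against a stopped profile: status exits with code 7 on the halted host (which the harness again notes as "may be ok"), yet enabling the addon still succeeds because it only edits the stored profile configuration, to be applied on the next start. By hand, as a sketch:

    minikube status --format='{{.Host}}' -p newest-cni-832533; echo "exit: $?"   # Stopped / exit: 7
    minikube addons enable dashboard -p newest-cni-832533 --images=MetricsScraper=registry.k8s.io/echoserver:1.4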

TestStartStop/group/newest-cni/serial/SecondStart (26.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-832533 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0-rc.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-832533 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0-rc.0: (25.962779691s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-832533 -n newest-cni-832533
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (26.25s)
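
Both newest-cni starts pass --wait=apiserver,system_pods,default_sa instead of the default wait set: with --network-plugin=cni and no CNI actually deployed, ordinary pods cannot schedule (hence the repeated "cni mode requires additional setup" warnings below), so minikube is told to block only on components that come up regardless. The relevant flags, isolated from the log's own invocation:

    minikube start -p newest-cni-832533 --driver=docker --container-runtime=crio \
        --kubernetes-version=v1.28.0-rc.0 \
        --wait=apiserver,system_pods,default_sa \
        --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16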

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-832533 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/newest-cni/serial/Pause (2.49s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-832533 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-832533 -n newest-cni-832533
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-832533 -n newest-cni-832533: exit status 2 (287.811982ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-832533 -n newest-cni-832533
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-832533 -n newest-cni-832533: exit status 2 (294.894819ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-832533 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-832533 -n newest-cni-832533
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-832533 -n newest-cni-832533
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.49s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-vppgf" [70ace3be-08be-4acd-8be5-2745714ce31f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.015865246s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-vppgf" [70ace3be-08be-4acd-8be5-2745714ce31f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008191786s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-604959 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-604959 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/old-k8s-version/serial/Pause (2.62s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-604959 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-604959 -n old-k8s-version-604959
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-604959 -n old-k8s-version-604959: exit status 2 (281.524554ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-604959 -n old-k8s-version-604959
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-604959 -n old-k8s-version-604959: exit status 2 (284.119768ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-604959 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-604959 -n old-k8s-version-604959
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-604959 -n old-k8s-version-604959
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.62s)

Test skip (27/304)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.27.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.27.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.4/cached-images (0.00s)

TestDownloadOnly/v1.27.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.27.4/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.4/binaries (0.00s)

TestDownloadOnly/v1.27.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.27.4/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.27.4/kubectl (0.00s)

TestDownloadOnly/v1.28.0-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0-rc.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0-rc.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0-rc.0/binaries (0.00s)

TestDownloadOnly/v1.28.0-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0-rc.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0-rc.0/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (3.77s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:522: 
----------------------- debugLogs start: kubenet-393336 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-393336

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-393336

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-393336

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-393336

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-393336

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-393336

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-393336

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-393336

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-393336

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-393336

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393336"

>>> host: /etc/hosts:
* Profile "kubenet-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393336"

>>> host: /etc/resolv.conf:
* Profile "kubenet-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393336"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-393336

>>> host: crictl pods:
* Profile "kubenet-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393336"

>>> host: crictl containers:
* Profile "kubenet-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393336"

>>> k8s: describe netcat deployment:
error: context "kubenet-393336" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-393336" does not exist

>>> k8s: netcat logs:
error: context "kubenet-393336" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-393336" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-393336" does not exist

>>> k8s: coredns logs:
error: context "kubenet-393336" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-393336" does not exist

>>> k8s: api server logs:
error: context "kubenet-393336" does not exist

>>> host: /etc/cni:
* Profile "kubenet-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393336"

>>> host: ip a s:
* Profile "kubenet-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393336"

>>> host: ip r s:
* Profile "kubenet-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393336"

>>> host: iptables-save:
* Profile "kubenet-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393336"

>>> host: iptables table nat:
* Profile "kubenet-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393336"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-393336" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-393336" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-393336" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393336"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393336"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393336"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393336"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393336"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17011-816603/.minikube/ca.crt
    server: https://127.0.0.1:33563
  name: missing-upgrade-980585
contexts:
- context:
    cluster: missing-upgrade-980585
    user: missing-upgrade-980585
  name: missing-upgrade-980585
current-context: ""
kind: Config
preferences: {}
users:
- name: missing-upgrade-980585
  user:
    client-certificate: /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/missing-upgrade-980585/client.crt
    client-key: /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/missing-upgrade-980585/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-393336

>>> host: docker daemon status:
* Profile "kubenet-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393336"

>>> host: docker daemon config:
* Profile "kubenet-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393336"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393336"

>>> host: docker system info:
* Profile "kubenet-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393336"

>>> host: cri-docker daemon status:
* Profile "kubenet-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393336"

>>> host: cri-docker daemon config:
* Profile "kubenet-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393336"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393336"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393336"

>>> host: cri-dockerd version:
* Profile "kubenet-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393336"

>>> host: containerd daemon status:
* Profile "kubenet-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393336"

>>> host: containerd daemon config:
* Profile "kubenet-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393336"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393336"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393336"

>>> host: containerd config dump:
* Profile "kubenet-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393336"

>>> host: crio daemon status:
* Profile "kubenet-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393336"

>>> host: crio daemon config:
* Profile "kubenet-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393336"

>>> host: /etc/crio:
* Profile "kubenet-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393336"

>>> host: crio config:
* Profile "kubenet-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-393336"

----------------------- debugLogs end: kubenet-393336 [took: 3.557529303s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-393336" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-393336
--- SKIP: TestNetworkPlugins/group/kubenet (3.77s)

x
+
TestNetworkPlugins/group/cilium (3.64s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-393336 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-393336

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-393336

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-393336

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-393336

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-393336

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-393336

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-393336

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-393336

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-393336

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-393336

>>> host: /etc/nsswitch.conf:
* Profile "cilium-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393336"

>>> host: /etc/hosts:
* Profile "cilium-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393336"

>>> host: /etc/resolv.conf:
* Profile "cilium-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393336"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-393336

>>> host: crictl pods:
* Profile "cilium-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393336"

>>> host: crictl containers:
* Profile "cilium-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393336"

>>> k8s: describe netcat deployment:
error: context "cilium-393336" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-393336" does not exist

>>> k8s: netcat logs:
error: context "cilium-393336" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-393336" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-393336" does not exist

>>> k8s: coredns logs:
error: context "cilium-393336" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-393336" does not exist

>>> k8s: api server logs:
error: context "cilium-393336" does not exist

>>> host: /etc/cni:
* Profile "cilium-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393336"

>>> host: ip a s:
* Profile "cilium-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393336"

>>> host: ip r s:
* Profile "cilium-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393336"

>>> host: iptables-save:
* Profile "cilium-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393336"

>>> host: iptables table nat:
* Profile "cilium-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393336"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-393336

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-393336

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-393336" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-393336" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-393336

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-393336

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-393336" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-393336" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-393336" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-393336" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-393336" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393336"

>>> host: kubelet daemon config:
* Profile "cilium-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393336"

>>> k8s: kubelet logs:
* Profile "cilium-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393336"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393336"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393336"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17011-816603/.minikube/ca.crt
    server: https://127.0.0.1:33563
  name: missing-upgrade-980585
contexts:
- context:
    cluster: missing-upgrade-980585
    user: missing-upgrade-980585
  name: missing-upgrade-980585
current-context: ""
kind: Config
preferences: {}
users:
- name: missing-upgrade-980585
  user:
    client-certificate: /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/missing-upgrade-980585/client.crt
    client-key: /home/jenkins/minikube-integration/17011-816603/.minikube/profiles/missing-upgrade-980585/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-393336

>>> host: docker daemon status:
* Profile "cilium-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393336"

>>> host: docker daemon config:
* Profile "cilium-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393336"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393336"

>>> host: docker system info:
* Profile "cilium-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393336"

>>> host: cri-docker daemon status:
* Profile "cilium-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393336"

>>> host: cri-docker daemon config:
* Profile "cilium-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393336"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393336"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393336"

>>> host: cri-dockerd version:
* Profile "cilium-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393336"

>>> host: containerd daemon status:
* Profile "cilium-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393336"

>>> host: containerd daemon config:
* Profile "cilium-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393336"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393336"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393336"

>>> host: containerd config dump:
* Profile "cilium-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393336"

>>> host: crio daemon status:
* Profile "cilium-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393336"

>>> host: crio daemon config:
* Profile "cilium-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393336"

>>> host: /etc/crio:
* Profile "cilium-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393336"

>>> host: crio config:
* Profile "cilium-393336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-393336"

----------------------- debugLogs end: cilium-393336 [took: 3.499066705s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-393336" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-393336
--- SKIP: TestNetworkPlugins/group/cilium (3.64s)

x
+
TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-891103" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-891103
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)