Test Report: Docker_Linux_crio 16865

e527c943862622d235c52d3f78f307a89288bf9f:2023-08-17:30622

Test fail (6/310)

|-------|------------------------------------------------------|--------------|
| Order | Failed test                                          | Duration (s) |
|-------|------------------------------------------------------|--------------|
| 32    | TestAddons/parallel/Ingress                          | 152.11       |
| 148   | TestFunctional/parallel/ImageCommands/ImageRemove    | 2.48         |
| 161   | TestIngressAddonLegacy/serial/ValidateIngressAddons  | 183.74       |
| 211   | TestMultiNode/serial/PingHostFrom2Pods               | 2.9          |
| 232   | TestRunningBinaryUpgrade                             | 71.82        |
| 240   | TestStoppedBinaryUpgrade/Upgrade                     | 100.58       |
|-------|------------------------------------------------------|--------------|
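
Each of these is a standard Go subtest under minikube's test/integration package, so a single failure can be replayed in isolation with Go's -run filter. A minimal sketch, assuming a prebuilt out/minikube-linux-amd64 and the same docker driver / crio runtime combination as this job (the harness-specific flags that point the suite at the binary are omitted here and may differ per the minikube contributing guide):

    # replay only the failing Ingress subtest; -run takes the full
    # subtest path exactly as it appears in the table above
    go test ./test/integration -v -timeout 30m -run "TestAddons/parallel/Ingress"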
TestAddons/parallel/Ingress (152.11s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-418182 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-418182 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-418182 replace --force -f testdata/nginx-pod-svc.yaml
2023/08/17 21:13:07 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [226a3962-879c-457f-aad0-8e9fd65682c9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [226a3962-879c-457f-aad0-8e9fd65682c9] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.008658044s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p addons-418182 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-418182 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.699676516s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:262: (dbg) Run:  kubectl --context addons-418182 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p addons-418182 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p addons-418182 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p addons-418182 addons disable ingress-dns --alsologtostderr -v=1: (1.314909367s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p addons-418182 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p addons-418182 addons disable ingress --alsologtostderr -v=1: (7.602281522s)
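The "ssh: Process exited with status 28" in stderr above is the remote curl's exit code, and 28 is curl's CURLE_OPERATION_TIMEDOUT: nothing answered on port 80 inside the node for the full 2m10s window, which points at the ingress controller rather than at ssh itself. The probe can be reproduced by hand with the same command; the -m 10 timeout cap and the -w status-code write-out below are additions for faster feedback, not part of the original test:

    out/minikube-linux-amd64 -p addons-418182 ssh \
      "curl -s -m 10 -o /dev/null -w '%{http_code}\n' http://127.0.0.1/ -H 'Host: nginx.example.com'"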
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-418182
helpers_test.go:235: (dbg) docker inspect addons-418182:

-- stdout --
	[
	    {
	        "Id": "9af52fc4f93f5d646344dda961f551a54d307c509b2028de101bb6cbaceab45e",
	        "Created": "2023-08-17T21:11:13.203718641Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 19237,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-08-17T21:11:13.490854016Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6cc01e6091959400f260dc442708e7c71630b58dab1f7c344cb00926bd84950",
	        "ResolvConfPath": "/var/lib/docker/containers/9af52fc4f93f5d646344dda961f551a54d307c509b2028de101bb6cbaceab45e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9af52fc4f93f5d646344dda961f551a54d307c509b2028de101bb6cbaceab45e/hostname",
	        "HostsPath": "/var/lib/docker/containers/9af52fc4f93f5d646344dda961f551a54d307c509b2028de101bb6cbaceab45e/hosts",
	        "LogPath": "/var/lib/docker/containers/9af52fc4f93f5d646344dda961f551a54d307c509b2028de101bb6cbaceab45e/9af52fc4f93f5d646344dda961f551a54d307c509b2028de101bb6cbaceab45e-json.log",
	        "Name": "/addons-418182",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-418182:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-418182",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7eafbf2ca60c3fb7688e9c57e16c327ba52093201a77f813ad4cb5a709900791-init/diff:/var/lib/docker/overlay2/4fa4181e3bc5ec3351265343644d26aad7e77680fc05db63fc4bb2710b90d29d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7eafbf2ca60c3fb7688e9c57e16c327ba52093201a77f813ad4cb5a709900791/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7eafbf2ca60c3fb7688e9c57e16c327ba52093201a77f813ad4cb5a709900791/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7eafbf2ca60c3fb7688e9c57e16c327ba52093201a77f813ad4cb5a709900791/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-418182",
	                "Source": "/var/lib/docker/volumes/addons-418182/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-418182",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-418182",
	                "name.minikube.sigs.k8s.io": "addons-418182",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e756f11d7fafb951c87645039b26cca45a067c661a09c43b28a947f14fc7c338",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e756f11d7faf",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-418182": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "9af52fc4f93f",
	                        "addons-418182"
	                    ],
	                    "NetworkID": "ed626f3512ae79e68fc75153550f06eeebe3b4f36342d89dca709adb3a3c3478",
	                    "EndpointID": "aaee80d0261bd007b91d8f734e316cc44c6d42da813c65b226cb3bce8db0a648",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
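The full docker inspect dump above mostly confirms a healthy node container (running, privileged, static IP 192.168.49.2). When only a few fields matter, the same Go templates the harness itself uses (visible in the cli_runner lines further down) can be run by hand; for example:

    # container state plus the static IP on the addons-418182 network
    docker container inspect -f '{{.State.Status}} {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' addons-418182

    # host port published for the node's sshd (22/tcp)
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-418182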
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-418182 -n addons-418182
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-418182 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-418182 logs -n 25: (1.108968791s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-538116   | jenkins | v1.31.2 | 17 Aug 23 21:10 UTC |                     |
	|         | -p download-only-538116           |                        |         |         |                     |                     |
	|         | --force --alsologtostderr         |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                        |         |         |                     |                     |
	|         | --container-runtime=crio          |                        |         |         |                     |                     |
	|         | --driver=docker                   |                        |         |         |                     |                     |
	|         | --container-runtime=crio          |                        |         |         |                     |                     |
	| start   | -o=json --download-only           | download-only-538116   | jenkins | v1.31.2 | 17 Aug 23 21:10 UTC |                     |
	|         | -p download-only-538116           |                        |         |         |                     |                     |
	|         | --force --alsologtostderr         |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.4      |                        |         |         |                     |                     |
	|         | --container-runtime=crio          |                        |         |         |                     |                     |
	|         | --driver=docker                   |                        |         |         |                     |                     |
	|         | --container-runtime=crio          |                        |         |         |                     |                     |
	| start   | -o=json --download-only           | download-only-538116   | jenkins | v1.31.2 | 17 Aug 23 21:10 UTC |                     |
	|         | -p download-only-538116           |                        |         |         |                     |                     |
	|         | --force --alsologtostderr         |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.0-rc.1 |                        |         |         |                     |                     |
	|         | --container-runtime=crio          |                        |         |         |                     |                     |
	|         | --driver=docker                   |                        |         |         |                     |                     |
	|         | --container-runtime=crio          |                        |         |         |                     |                     |
	| delete  | --all                             | minikube               | jenkins | v1.31.2 | 17 Aug 23 21:10 UTC | 17 Aug 23 21:10 UTC |
	| delete  | -p download-only-538116           | download-only-538116   | jenkins | v1.31.2 | 17 Aug 23 21:10 UTC | 17 Aug 23 21:10 UTC |
	| delete  | -p download-only-538116           | download-only-538116   | jenkins | v1.31.2 | 17 Aug 23 21:10 UTC | 17 Aug 23 21:10 UTC |
	| start   | --download-only -p                | download-docker-218206 | jenkins | v1.31.2 | 17 Aug 23 21:10 UTC |                     |
	|         | download-docker-218206            |                        |         |         |                     |                     |
	|         | --alsologtostderr                 |                        |         |         |                     |                     |
	|         | --driver=docker                   |                        |         |         |                     |                     |
	|         | --container-runtime=crio          |                        |         |         |                     |                     |
	| delete  | -p download-docker-218206         | download-docker-218206 | jenkins | v1.31.2 | 17 Aug 23 21:10 UTC | 17 Aug 23 21:10 UTC |
	| start   | --download-only -p                | binary-mirror-960837   | jenkins | v1.31.2 | 17 Aug 23 21:10 UTC |                     |
	|         | binary-mirror-960837              |                        |         |         |                     |                     |
	|         | --alsologtostderr                 |                        |         |         |                     |                     |
	|         | --binary-mirror                   |                        |         |         |                     |                     |
	|         | http://127.0.0.1:40255            |                        |         |         |                     |                     |
	|         | --driver=docker                   |                        |         |         |                     |                     |
	|         | --container-runtime=crio          |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-960837           | binary-mirror-960837   | jenkins | v1.31.2 | 17 Aug 23 21:10 UTC | 17 Aug 23 21:10 UTC |
	| start   | -p addons-418182                  | addons-418182          | jenkins | v1.31.2 | 17 Aug 23 21:10 UTC | 17 Aug 23 21:12 UTC |
	|         | --wait=true --memory=4000         |                        |         |         |                     |                     |
	|         | --alsologtostderr                 |                        |         |         |                     |                     |
	|         | --addons=registry                 |                        |         |         |                     |                     |
	|         | --addons=metrics-server           |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots          |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver      |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                 |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner            |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget         |                        |         |         |                     |                     |
	|         | --driver=docker                   |                        |         |         |                     |                     |
	|         | --container-runtime=crio          |                        |         |         |                     |                     |
	|         | --addons=ingress                  |                        |         |         |                     |                     |
	|         | --addons=ingress-dns              |                        |         |         |                     |                     |
	|         | --addons=helm-tiller              |                        |         |         |                     |                     |
	| addons  | enable headlamp                   | addons-418182          | jenkins | v1.31.2 | 17 Aug 23 21:12 UTC | 17 Aug 23 21:12 UTC |
	|         | -p addons-418182                  |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1            |                        |         |         |                     |                     |
	| addons  | addons-418182 addons              | addons-418182          | jenkins | v1.31.2 | 17 Aug 23 21:12 UTC | 17 Aug 23 21:12 UTC |
	|         | disable metrics-server            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1            |                        |         |         |                     |                     |
	| addons  | addons-418182 addons disable      | addons-418182          | jenkins | v1.31.2 | 17 Aug 23 21:13 UTC | 17 Aug 23 21:13 UTC |
	|         | helm-tiller --alsologtostderr     |                        |         |         |                     |                     |
	|         | -v=1                              |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p          | addons-418182          | jenkins | v1.31.2 | 17 Aug 23 21:13 UTC | 17 Aug 23 21:13 UTC |
	|         | addons-418182                     |                        |         |         |                     |                     |
	| ip      | addons-418182 ip                  | addons-418182          | jenkins | v1.31.2 | 17 Aug 23 21:13 UTC | 17 Aug 23 21:13 UTC |
	| addons  | addons-418182 addons disable      | addons-418182          | jenkins | v1.31.2 | 17 Aug 23 21:13 UTC | 17 Aug 23 21:13 UTC |
	|         | registry --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                              |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p       | addons-418182          | jenkins | v1.31.2 | 17 Aug 23 21:13 UTC | 17 Aug 23 21:13 UTC |
	|         | addons-418182                     |                        |         |         |                     |                     |
	| ssh     | addons-418182 ssh curl -s         | addons-418182          | jenkins | v1.31.2 | 17 Aug 23 21:13 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:       |                        |         |         |                     |                     |
	|         | nginx.example.com'                |                        |         |         |                     |                     |
	| addons  | addons-418182 addons              | addons-418182          | jenkins | v1.31.2 | 17 Aug 23 21:13 UTC | 17 Aug 23 21:13 UTC |
	|         | disable csi-hostpath-driver       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1            |                        |         |         |                     |                     |
	| addons  | addons-418182 addons              | addons-418182          | jenkins | v1.31.2 | 17 Aug 23 21:13 UTC | 17 Aug 23 21:13 UTC |
	|         | disable volumesnapshots           |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1            |                        |         |         |                     |                     |
	| ip      | addons-418182 ip                  | addons-418182          | jenkins | v1.31.2 | 17 Aug 23 21:15 UTC | 17 Aug 23 21:15 UTC |
	| addons  | addons-418182 addons disable      | addons-418182          | jenkins | v1.31.2 | 17 Aug 23 21:15 UTC | 17 Aug 23 21:15 UTC |
	|         | ingress-dns --alsologtostderr     |                        |         |         |                     |                     |
	|         | -v=1                              |                        |         |         |                     |                     |
	| addons  | addons-418182 addons disable      | addons-418182          | jenkins | v1.31.2 | 17 Aug 23 21:15 UTC | 17 Aug 23 21:15 UTC |
	|         | ingress --alsologtostderr -v=1    |                        |         |         |                     |                     |
	|---------|-----------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/17 21:10:51
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 21:10:51.588698   18563 out.go:296] Setting OutFile to fd 1 ...
	I0817 21:10:51.588891   18563 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:10:51.588900   18563 out.go:309] Setting ErrFile to fd 2...
	I0817 21:10:51.588907   18563 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:10:51.589120   18563 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-10716/.minikube/bin
	I0817 21:10:51.589682   18563 out.go:303] Setting JSON to false
	I0817 21:10:51.590490   18563 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3200,"bootTime":1692303452,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0817 21:10:51.590553   18563 start.go:138] virtualization: kvm guest
	I0817 21:10:51.592960   18563 out.go:177] * [addons-418182] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0817 21:10:51.594464   18563 notify.go:220] Checking for updates...
	I0817 21:10:51.594491   18563 out.go:177]   - MINIKUBE_LOCATION=16865
	I0817 21:10:51.595823   18563 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0817 21:10:51.597093   18563 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16865-10716/kubeconfig
	I0817 21:10:51.598534   18563 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-10716/.minikube
	I0817 21:10:51.600064   18563 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0817 21:10:51.601689   18563 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0817 21:10:51.603494   18563 driver.go:373] Setting default libvirt URI to qemu:///system
	I0817 21:10:51.623264   18563 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0817 21:10:51.623357   18563 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0817 21:10:51.672286   18563 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-08-17 21:10:51.664222728 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0817 21:10:51.672385   18563 docker.go:294] overlay module found
	I0817 21:10:51.674202   18563 out.go:177] * Using the docker driver based on user configuration
	I0817 21:10:51.675473   18563 start.go:298] selected driver: docker
	I0817 21:10:51.675483   18563 start.go:902] validating driver "docker" against <nil>
	I0817 21:10:51.675493   18563 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0817 21:10:51.676227   18563 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0817 21:10:51.726513   18563 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-08-17 21:10:51.718759622 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0817 21:10:51.726645   18563 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0817 21:10:51.726823   18563 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0817 21:10:51.728517   18563 out.go:177] * Using Docker driver with root privileges
	I0817 21:10:51.729976   18563 cni.go:84] Creating CNI manager for ""
	I0817 21:10:51.729997   18563 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0817 21:10:51.730007   18563 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0817 21:10:51.730017   18563 start_flags.go:319] config:
	{Name:addons-418182 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:addons-418182 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:10:51.731742   18563 out.go:177] * Starting control plane node addons-418182 in cluster addons-418182
	I0817 21:10:51.733158   18563 cache.go:122] Beginning downloading kic base image for docker with crio
	I0817 21:10:51.734840   18563 out.go:177] * Pulling base image ...
	I0817 21:10:51.736133   18563 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0817 21:10:51.736161   18563 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16865-10716/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4
	I0817 21:10:51.736169   18563 cache.go:57] Caching tarball of preloaded images
	I0817 21:10:51.736179   18563 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0817 21:10:51.736242   18563 preload.go:174] Found /home/jenkins/minikube-integration/16865-10716/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0817 21:10:51.736252   18563 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on crio
	I0817 21:10:51.736549   18563 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/config.json ...
	I0817 21:10:51.736571   18563 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/config.json: {Name:mk254ad34da9e84b9f32fcc7c7d382f20fce3383 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:10:51.750610   18563 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0817 21:10:51.750706   18563 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory
	I0817 21:10:51.750721   18563 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory, skipping pull
	I0817 21:10:51.750724   18563 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in cache, skipping pull
	I0817 21:10:51.750731   18563 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 as a tarball
	I0817 21:10:51.750738   18563 cache.go:163] Loading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 from local cache
	I0817 21:11:02.883775   18563 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 from cached tarball
	I0817 21:11:02.883813   18563 cache.go:195] Successfully downloaded all kic artifacts
	I0817 21:11:02.883871   18563 start.go:365] acquiring machines lock for addons-418182: {Name:mk64890e6c7f1530440a4fddf48c0e21211aa662 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 21:11:02.883973   18563 start.go:369] acquired machines lock for "addons-418182" in 81.885µs
	I0817 21:11:02.884002   18563 start.go:93] Provisioning new machine with config: &{Name:addons-418182 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:addons-418182 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0817 21:11:02.884098   18563 start.go:125] createHost starting for "" (driver="docker")
	I0817 21:11:02.885842   18563 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0817 21:11:02.886070   18563 start.go:159] libmachine.API.Create for "addons-418182" (driver="docker")
	I0817 21:11:02.886094   18563 client.go:168] LocalClient.Create starting
	I0817 21:11:02.886203   18563 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca.pem
	I0817 21:11:03.078611   18563 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/cert.pem
	I0817 21:11:03.168481   18563 cli_runner.go:164] Run: docker network inspect addons-418182 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0817 21:11:03.183427   18563 cli_runner.go:211] docker network inspect addons-418182 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0817 21:11:03.183510   18563 network_create.go:281] running [docker network inspect addons-418182] to gather additional debugging logs...
	I0817 21:11:03.183532   18563 cli_runner.go:164] Run: docker network inspect addons-418182
	W0817 21:11:03.197845   18563 cli_runner.go:211] docker network inspect addons-418182 returned with exit code 1
	I0817 21:11:03.197872   18563 network_create.go:284] error running [docker network inspect addons-418182]: docker network inspect addons-418182: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-418182 not found
	I0817 21:11:03.197889   18563 network_create.go:286] output of [docker network inspect addons-418182]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-418182 not found
	
	** /stderr **
	I0817 21:11:03.197957   18563 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 21:11:03.212088   18563 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00065fd30}
	I0817 21:11:03.212129   18563 network_create.go:123] attempt to create docker network addons-418182 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0817 21:11:03.212183   18563 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-418182 addons-418182
	I0817 21:11:03.261641   18563 network_create.go:107] docker network addons-418182 192.168.49.0/24 created
	I0817 21:11:03.261681   18563 kic.go:117] calculated static IP "192.168.49.2" for the "addons-418182" container
	I0817 21:11:03.261744   18563 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0817 21:11:03.275817   18563 cli_runner.go:164] Run: docker volume create addons-418182 --label name.minikube.sigs.k8s.io=addons-418182 --label created_by.minikube.sigs.k8s.io=true
	I0817 21:11:03.290633   18563 oci.go:103] Successfully created a docker volume addons-418182
	I0817 21:11:03.290702   18563 cli_runner.go:164] Run: docker run --rm --name addons-418182-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-418182 --entrypoint /usr/bin/test -v addons-418182:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0817 21:11:08.240304   18563 cli_runner.go:217] Completed: docker run --rm --name addons-418182-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-418182 --entrypoint /usr/bin/test -v addons-418182:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib: (4.949534027s)
	I0817 21:11:08.240339   18563 oci.go:107] Successfully prepared a docker volume addons-418182
	I0817 21:11:08.240356   18563 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0817 21:11:08.240374   18563 kic.go:190] Starting extracting preloaded images to volume ...
	I0817 21:11:08.240438   18563 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16865-10716/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-418182:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0817 21:11:13.140117   18563 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16865-10716/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-418182:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (4.899614242s)
	I0817 21:11:13.140145   18563 kic.go:199] duration metric: took 4.899769 seconds to extract preloaded images to volume
	W0817 21:11:13.140279   18563 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0817 21:11:13.140389   18563 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0817 21:11:13.189735   18563 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-418182 --name addons-418182 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-418182 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-418182 --network addons-418182 --ip 192.168.49.2 --volume addons-418182:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0817 21:11:13.498793   18563 cli_runner.go:164] Run: docker container inspect addons-418182 --format={{.State.Running}}
	I0817 21:11:13.515763   18563 cli_runner.go:164] Run: docker container inspect addons-418182 --format={{.State.Status}}
	I0817 21:11:13.532284   18563 cli_runner.go:164] Run: docker exec addons-418182 stat /var/lib/dpkg/alternatives/iptables
	I0817 21:11:13.572191   18563 oci.go:144] the created container "addons-418182" has a running status.
	I0817 21:11:13.572225   18563 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16865-10716/.minikube/machines/addons-418182/id_rsa...
	I0817 21:11:13.636455   18563 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16865-10716/.minikube/machines/addons-418182/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0817 21:11:13.656572   18563 cli_runner.go:164] Run: docker container inspect addons-418182 --format={{.State.Status}}
	I0817 21:11:13.672497   18563 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0817 21:11:13.672515   18563 kic_runner.go:114] Args: [docker exec --privileged addons-418182 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0817 21:11:13.730757   18563 cli_runner.go:164] Run: docker container inspect addons-418182 --format={{.State.Status}}
	I0817 21:11:13.746882   18563 machine.go:88] provisioning docker machine ...
	I0817 21:11:13.746914   18563 ubuntu.go:169] provisioning hostname "addons-418182"
	I0817 21:11:13.746980   18563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418182
	I0817 21:11:13.763175   18563 main.go:141] libmachine: Using SSH client type: native
	I0817 21:11:13.763678   18563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0817 21:11:13.763697   18563 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-418182 && echo "addons-418182" | sudo tee /etc/hostname
	I0817 21:11:13.764999   18563 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36226->127.0.0.1:32772: read: connection reset by peer
	I0817 21:11:16.899707   18563 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-418182
	
	I0817 21:11:16.899789   18563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418182
	I0817 21:11:16.915255   18563 main.go:141] libmachine: Using SSH client type: native
	I0817 21:11:16.915667   18563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0817 21:11:16.915692   18563 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-418182' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-418182/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-418182' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 21:11:17.037533   18563 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 21:11:17.037564   18563 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16865-10716/.minikube CaCertPath:/home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16865-10716/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16865-10716/.minikube}
	I0817 21:11:17.037603   18563 ubuntu.go:177] setting up certificates
	I0817 21:11:17.037616   18563 provision.go:83] configureAuth start
	I0817 21:11:17.037671   18563 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-418182
	I0817 21:11:17.052976   18563 provision.go:138] copyHostCerts
	I0817 21:11:17.053045   18563 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16865-10716/.minikube/ca.pem (1078 bytes)
	I0817 21:11:17.053155   18563 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16865-10716/.minikube/cert.pem (1123 bytes)
	I0817 21:11:17.053227   18563 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16865-10716/.minikube/key.pem (1679 bytes)
	I0817 21:11:17.053274   18563 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16865-10716/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca-key.pem org=jenkins.addons-418182 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-418182]
	I0817 21:11:17.207459   18563 provision.go:172] copyRemoteCerts
	I0817 21:11:17.207512   18563 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 21:11:17.207548   18563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418182
	I0817 21:11:17.226165   18563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/addons-418182/id_rsa Username:docker}
	I0817 21:11:17.313384   18563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0817 21:11:17.332669   18563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0817 21:11:17.351325   18563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0817 21:11:17.369860   18563 provision.go:86] duration metric: configureAuth took 332.230485ms
	I0817 21:11:17.369888   18563 ubuntu.go:193] setting minikube options for container-runtime
	I0817 21:11:17.370085   18563 config.go:182] Loaded profile config "addons-418182": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 21:11:17.370174   18563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418182
	I0817 21:11:17.386893   18563 main.go:141] libmachine: Using SSH client type: native
	I0817 21:11:17.387288   18563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0817 21:11:17.387311   18563 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0817 21:11:17.592933   18563 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
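	The CRIO_MINIKUBE_OPTIONS value written above only takes effect because the kicbase image's crio unit sources that file; a sketch of the kind of systemd drop-in involved (the drop-in path and ExecStart line are assumptions for illustration, not read from this log):
	
	    [Service]
	    EnvironmentFile=-/etc/sysconfig/crio.minikube    # assumed wiring for CRIO_MINIKUBE_OPTIONS
	    ExecStart=
	    ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS
	
	The trailing "systemctl restart crio" in the command above is what makes the new --insecure-registry flag live.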
	I0817 21:11:17.592956   18563 machine.go:91] provisioned docker machine in 3.846053851s
	I0817 21:11:17.592967   18563 client.go:171] LocalClient.Create took 14.706866355s
	I0817 21:11:17.592981   18563 start.go:167] duration metric: libmachine.API.Create for "addons-418182" took 14.70691009s
	I0817 21:11:17.592990   18563 start.go:300] post-start starting for "addons-418182" (driver="docker")
	I0817 21:11:17.593000   18563 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 21:11:17.593055   18563 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 21:11:17.593100   18563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418182
	I0817 21:11:17.608905   18563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/addons-418182/id_rsa Username:docker}
	I0817 21:11:17.697749   18563 ssh_runner.go:195] Run: cat /etc/os-release
	I0817 21:11:17.700391   18563 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0817 21:11:17.700435   18563 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0817 21:11:17.700447   18563 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0817 21:11:17.700456   18563 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0817 21:11:17.700469   18563 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-10716/.minikube/addons for local assets ...
	I0817 21:11:17.700528   18563 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-10716/.minikube/files for local assets ...
	I0817 21:11:17.700561   18563 start.go:303] post-start completed in 107.564189ms
	I0817 21:11:17.700841   18563 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-418182
	I0817 21:11:17.716526   18563 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/config.json ...
	I0817 21:11:17.716793   18563 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 21:11:17.716850   18563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418182
	I0817 21:11:17.732561   18563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/addons-418182/id_rsa Username:docker}
	I0817 21:11:17.818272   18563 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0817 21:11:17.822001   18563 start.go:128] duration metric: createHost completed in 14.937889642s
	I0817 21:11:17.822022   18563 start.go:83] releasing machines lock for "addons-418182", held for 14.938037443s
	I0817 21:11:17.822087   18563 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-418182
	I0817 21:11:17.837886   18563 ssh_runner.go:195] Run: cat /version.json
	I0817 21:11:17.837954   18563 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0817 21:11:17.838008   18563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418182
	I0817 21:11:17.838013   18563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418182
	I0817 21:11:17.855081   18563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/addons-418182/id_rsa Username:docker}
	I0817 21:11:17.855213   18563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/addons-418182/id_rsa Username:docker}
	I0817 21:11:18.040751   18563 ssh_runner.go:195] Run: systemctl --version
	I0817 21:11:18.044610   18563 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0817 21:11:18.180551   18563 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0817 21:11:18.184475   18563 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 21:11:18.200507   18563 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0817 21:11:18.200573   18563 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 21:11:18.224893   18563 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
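	Both find/mv passes above use a rename-to-disable pattern rather than deleting configs, so the change is reversible; a hedged bash equivalent of the same idea:
	
	    # move matching CNI configs aside so cri-o stops loading them
	    for f in /etc/cni/net.d/*bridge* /etc/cni/net.d/*podman*; do
	      case "$f" in
	        *.mk_disabled|*'*'*) ;;                 # skip already-disabled files and unmatched globs
	        *) sudo mv "$f" "$f.mk_disabled" ;;
	      esac
	    done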
	I0817 21:11:18.224910   18563 start.go:466] detecting cgroup driver to use...
	I0817 21:11:18.224935   18563 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0817 21:11:18.224966   18563 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0817 21:11:18.237291   18563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0817 21:11:18.246092   18563 docker.go:196] disabling cri-docker service (if available) ...
	I0817 21:11:18.246129   18563 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0817 21:11:18.256788   18563 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0817 21:11:18.268210   18563 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0817 21:11:18.330451   18563 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0817 21:11:18.405420   18563 docker.go:212] disabling docker service ...
	I0817 21:11:18.405482   18563 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0817 21:11:18.421398   18563 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0817 21:11:18.430688   18563 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0817 21:11:18.501950   18563 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0817 21:11:18.578287   18563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0817 21:11:18.587915   18563 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 21:11:18.601636   18563 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0817 21:11:18.601685   18563 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 21:11:18.609958   18563 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0817 21:11:18.610024   18563 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 21:11:18.617888   18563 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 21:11:18.625606   18563 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
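	Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys; the [crio.image]/[crio.runtime] section headers are standard cri-o TOML layout and are an assumption here, not quoted from this log:
	
	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.9"
	
	    [crio.runtime]
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"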
	I0817 21:11:18.633559   18563 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0817 21:11:18.640828   18563 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 21:11:18.647423   18563 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 21:11:18.654041   18563 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0817 21:11:18.725684   18563 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0817 21:11:18.820040   18563 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0817 21:11:18.820114   18563 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0817 21:11:18.823224   18563 start.go:534] Will wait 60s for crictl version
	I0817 21:11:18.823268   18563 ssh_runner.go:195] Run: which crictl
	I0817 21:11:18.826025   18563 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 21:11:18.856577   18563 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
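	That bare "crictl version" call works without a --runtime-endpoint flag because of the /etc/crictl.yaml written at 21:11:18 above; a quick way to confirm the wiring by hand:
	
	    sudo cat /etc/crictl.yaml    # runtime-endpoint: unix:///var/run/crio/crio.sock
	    sudo crictl info             # queries the same endpoint; fails fast if the socket is wrong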
	I0817 21:11:18.856676   18563 ssh_runner.go:195] Run: crio --version
	I0817 21:11:18.888190   18563 ssh_runner.go:195] Run: crio --version
	I0817 21:11:18.922507   18563 out.go:177] * Preparing Kubernetes v1.27.4 on CRI-O 1.24.6 ...
	I0817 21:11:18.923965   18563 cli_runner.go:164] Run: docker network inspect addons-418182 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
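	The Go template above flattens the Docker network metadata into one JSON object; a plausible rendering for this cluster (Subnet and Gateway inferred from the host entries below, Driver and MTU assumed) would look like:
	
	    {"Name": "addons-418182","Driver": "bridge","Subnet": "192.168.49.0/24","Gateway": "192.168.49.1","MTU": 0, "ContainerIPs": ["192.168.49.2/24",]}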
	I0817 21:11:18.939321   18563 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0817 21:11:18.942476   18563 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 21:11:18.951589   18563 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0817 21:11:18.951637   18563 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 21:11:18.996816   18563 crio.go:496] all images are preloaded for cri-o runtime.
	I0817 21:11:18.996837   18563 crio.go:415] Images already preloaded, skipping extraction
	I0817 21:11:18.996874   18563 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 21:11:19.026812   18563 crio.go:496] all images are preloaded for cri-o runtime.
	I0817 21:11:19.026832   18563 cache_images.go:84] Images are preloaded, skipping loading
	I0817 21:11:19.026884   18563 ssh_runner.go:195] Run: crio config
	I0817 21:11:19.065414   18563 cni.go:84] Creating CNI manager for ""
	I0817 21:11:19.065433   18563 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0817 21:11:19.065450   18563 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 21:11:19.065471   18563 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-418182 NodeName:addons-418182 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0817 21:11:19.065583   18563 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-418182"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
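	Since this rendered config is staged as /var/tmp/minikube/kubeadm.yaml.new and copied into place before init (both steps appear later in this log), it can be sanity-checked ahead of time; a hedged example using the same pinned binary:
	
	    sudo /var/lib/minikube/binaries/v1.27.4/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run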
	
	I0817 21:11:19.065652   18563 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-418182 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:addons-418182 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0817 21:11:19.065697   18563 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0817 21:11:19.074017   18563 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 21:11:19.074079   18563 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 21:11:19.081209   18563 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I0817 21:11:19.096342   18563 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 21:11:19.111903   18563 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
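	Those three "scp memory -->" writes materialize the kubelet unit, its kubeadm drop-in, and the staged kubeadm config directly from minikube's in-memory assets; after dropping unit files under /etc/systemd/system and /lib/systemd/system, the usual follow-up (hedged, since this log only shows the reload for crio) is:
	
	    sudo systemctl daemon-reload    # pick up kubelet.service and the 10-kubeadm.conf drop-in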
	I0817 21:11:19.126620   18563 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0817 21:11:19.129446   18563 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 21:11:19.138271   18563 certs.go:56] Setting up /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182 for IP: 192.168.49.2
	I0817 21:11:19.138303   18563 certs.go:190] acquiring lock for shared ca certs: {Name:mkccb042866dbfd72de305663f91f6bc6da7b7e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:11:19.138422   18563 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/16865-10716/.minikube/ca.key
	I0817 21:11:19.296332   18563 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16865-10716/.minikube/ca.crt ...
	I0817 21:11:19.296361   18563 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-10716/.minikube/ca.crt: {Name:mkf7c2aace28da06343ca0043e48b42db5aac975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:11:19.296518   18563 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16865-10716/.minikube/ca.key ...
	I0817 21:11:19.296528   18563 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-10716/.minikube/ca.key: {Name:mk8da3c9e500fbc58670c8916533b47ed564f758 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:11:19.296599   18563 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/16865-10716/.minikube/proxy-client-ca.key
	I0817 21:11:19.479101   18563 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16865-10716/.minikube/proxy-client-ca.crt ...
	I0817 21:11:19.479131   18563 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-10716/.minikube/proxy-client-ca.crt: {Name:mk833ccbd997d4af72677a1baef2f8a7056e8fc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:11:19.479318   18563 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16865-10716/.minikube/proxy-client-ca.key ...
	I0817 21:11:19.479337   18563 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-10716/.minikube/proxy-client-ca.key: {Name:mk8a6acb46038bc0908852ad04c8ecb085f8957c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:11:19.479465   18563 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/client.key
	I0817 21:11:19.479483   18563 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/client.crt with IP's: []
	I0817 21:11:19.631151   18563 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/client.crt ...
	I0817 21:11:19.631183   18563 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/client.crt: {Name:mk7931afd96a0326ebecdc510065d497607ce58e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:11:19.631363   18563 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/client.key ...
	I0817 21:11:19.631379   18563 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/client.key: {Name:mk59c9295d23e4b54eb59746f0db3190e46da6ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:11:19.631468   18563 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/apiserver.key.dd3b5fb2
	I0817 21:11:19.631489   18563 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0817 21:11:19.797805   18563 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/apiserver.crt.dd3b5fb2 ...
	I0817 21:11:19.797836   18563 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/apiserver.crt.dd3b5fb2: {Name:mke717b438fb17077211e8681438fc0f82f822c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:11:19.798042   18563 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/apiserver.key.dd3b5fb2 ...
	I0817 21:11:19.798059   18563 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/apiserver.key.dd3b5fb2: {Name:mka6409ea65a7caf4623bdb662328f69584d82bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:11:19.798153   18563 certs.go:337] copying /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/apiserver.crt
	I0817 21:11:19.798239   18563 certs.go:341] copying /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/apiserver.key
	I0817 21:11:19.798295   18563 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/proxy-client.key
	I0817 21:11:19.798316   18563 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/proxy-client.crt with IP's: []
	I0817 21:11:19.902673   18563 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/proxy-client.crt ...
	I0817 21:11:19.902701   18563 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/proxy-client.crt: {Name:mk5cb8485fcf759a9619ef9d90fd82895e448241 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:11:19.902868   18563 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/proxy-client.key ...
	I0817 21:11:19.902886   18563 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/proxy-client.key: {Name:mkd8b308a33194fe4b4fd7acd935ca9845f4b257 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:11:19.903093   18563 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 21:11:19.903137   18563 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca.pem (1078 bytes)
	I0817 21:11:19.903173   18563 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/home/jenkins/minikube-integration/16865-10716/.minikube/certs/cert.pem (1123 bytes)
	I0817 21:11:19.903201   18563 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/home/jenkins/minikube-integration/16865-10716/.minikube/certs/key.pem (1679 bytes)
	I0817 21:11:19.903786   18563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 21:11:19.924271   18563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0817 21:11:19.943582   18563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 21:11:19.962616   18563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0817 21:11:19.981800   18563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 21:11:20.000950   18563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0817 21:11:20.020358   18563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 21:11:20.039680   18563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0817 21:11:20.058753   18563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 21:11:20.078441   18563 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 21:11:20.092812   18563 ssh_runner.go:195] Run: openssl version
	I0817 21:11:20.097574   18563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 21:11:20.105489   18563 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:11:20.108311   18563 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 17 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:11:20.108349   18563 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:11:20.114408   18563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
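	The b5213941.0 link name is not arbitrary: it is the CA's OpenSSL subject hash, which is how TLS clients locate trust anchors in /etc/ssl/certs. Recomputing it by hand:
	
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"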
	I0817 21:11:20.121958   18563 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0817 21:11:20.124613   18563 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0817 21:11:20.124654   18563 kubeadm.go:404] StartCluster: {Name:addons-418182 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:addons-418182 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:11:20.124721   18563 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0817 21:11:20.124751   18563 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 21:11:20.155670   18563 cri.go:89] found id: ""
	I0817 21:11:20.155718   18563 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 21:11:20.162965   18563 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 21:11:20.170068   18563 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0817 21:11:20.170129   18563 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 21:11:20.177065   18563 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 21:11:20.177110   18563 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0817 21:11:20.249247   18563 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1039-gcp\n", err: exit status 1
	I0817 21:11:20.306037   18563 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0817 21:11:29.333666   18563 kubeadm.go:322] [init] Using Kubernetes version: v1.27.4
	I0817 21:11:29.333751   18563 kubeadm.go:322] [preflight] Running pre-flight checks
	I0817 21:11:29.333868   18563 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0817 21:11:29.333971   18563 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1039-gcp
	I0817 21:11:29.334062   18563 kubeadm.go:322] OS: Linux
	I0817 21:11:29.334126   18563 kubeadm.go:322] CGROUPS_CPU: enabled
	I0817 21:11:29.334185   18563 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0817 21:11:29.334249   18563 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0817 21:11:29.334305   18563 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0817 21:11:29.334348   18563 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0817 21:11:29.334430   18563 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0817 21:11:29.334498   18563 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0817 21:11:29.334567   18563 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0817 21:11:29.334626   18563 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0817 21:11:29.334723   18563 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0817 21:11:29.334803   18563 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0817 21:11:29.334881   18563 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0817 21:11:29.334937   18563 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0817 21:11:29.336286   18563 out.go:204]   - Generating certificates and keys ...
	I0817 21:11:29.336346   18563 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0817 21:11:29.336399   18563 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0817 21:11:29.336451   18563 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0817 21:11:29.336506   18563 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0817 21:11:29.336580   18563 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0817 21:11:29.336625   18563 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0817 21:11:29.336671   18563 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0817 21:11:29.336772   18563 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-418182 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0817 21:11:29.336821   18563 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0817 21:11:29.336914   18563 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-418182 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0817 21:11:29.337031   18563 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0817 21:11:29.337119   18563 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0817 21:11:29.337185   18563 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0817 21:11:29.337259   18563 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0817 21:11:29.337332   18563 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0817 21:11:29.337403   18563 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0817 21:11:29.337492   18563 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0817 21:11:29.337542   18563 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0817 21:11:29.337631   18563 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0817 21:11:29.337705   18563 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0817 21:11:29.337738   18563 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0817 21:11:29.337793   18563 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0817 21:11:29.339399   18563 out.go:204]   - Booting up control plane ...
	I0817 21:11:29.339471   18563 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0817 21:11:29.339538   18563 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0817 21:11:29.339599   18563 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0817 21:11:29.339672   18563 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0817 21:11:29.339805   18563 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0817 21:11:29.339881   18563 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.002215 seconds
	I0817 21:11:29.339966   18563 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0817 21:11:29.340072   18563 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0817 21:11:29.340128   18563 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0817 21:11:29.340278   18563 kubeadm.go:322] [mark-control-plane] Marking the node addons-418182 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0817 21:11:29.340327   18563 kubeadm.go:322] [bootstrap-token] Using token: vho269.3z1zjameajwq4j1f
	I0817 21:11:29.341833   18563 out.go:204]   - Configuring RBAC rules ...
	I0817 21:11:29.341932   18563 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0817 21:11:29.342006   18563 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0817 21:11:29.342146   18563 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0817 21:11:29.342313   18563 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0817 21:11:29.342480   18563 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0817 21:11:29.342593   18563 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0817 21:11:29.342695   18563 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0817 21:11:29.342732   18563 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0817 21:11:29.342770   18563 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0817 21:11:29.342776   18563 kubeadm.go:322] 
	I0817 21:11:29.342823   18563 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0817 21:11:29.342828   18563 kubeadm.go:322] 
	I0817 21:11:29.342893   18563 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0817 21:11:29.342902   18563 kubeadm.go:322] 
	I0817 21:11:29.342922   18563 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0817 21:11:29.342975   18563 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0817 21:11:29.343017   18563 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0817 21:11:29.343023   18563 kubeadm.go:322] 
	I0817 21:11:29.343073   18563 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0817 21:11:29.343079   18563 kubeadm.go:322] 
	I0817 21:11:29.343119   18563 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0817 21:11:29.343125   18563 kubeadm.go:322] 
	I0817 21:11:29.343169   18563 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0817 21:11:29.343235   18563 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0817 21:11:29.343292   18563 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0817 21:11:29.343297   18563 kubeadm.go:322] 
	I0817 21:11:29.343371   18563 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0817 21:11:29.343437   18563 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0817 21:11:29.343444   18563 kubeadm.go:322] 
	I0817 21:11:29.343526   18563 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token vho269.3z1zjameajwq4j1f \
	I0817 21:11:29.343615   18563 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6990f7150c46d703a60b6aaa6f152cf1f359295cabe399f949b0e443e5fdc599 \
	I0817 21:11:29.343634   18563 kubeadm.go:322] 	--control-plane 
	I0817 21:11:29.343640   18563 kubeadm.go:322] 
	I0817 21:11:29.343715   18563 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0817 21:11:29.343722   18563 kubeadm.go:322] 
	I0817 21:11:29.343786   18563 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token vho269.3z1zjameajwq4j1f \
	I0817 21:11:29.343874   18563 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6990f7150c46d703a60b6aaa6f152cf1f359295cabe399f949b0e443e5fdc599 
	I0817 21:11:29.343883   18563 cni.go:84] Creating CNI manager for ""
	I0817 21:11:29.343889   18563 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0817 21:11:29.345408   18563 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0817 21:11:29.346696   18563 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0817 21:11:29.350270   18563 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.27.4/kubectl ...
	I0817 21:11:29.350284   18563 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0817 21:11:29.365123   18563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0817 21:11:30.008351   18563 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 21:11:30.008430   18563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:30.008445   18563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=887b29127f76723e975982e9ba9e8c24f3dd2612 minikube.k8s.io/name=addons-418182 minikube.k8s.io/updated_at=2023_08_17T21_11_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:30.015219   18563 ops.go:34] apiserver oom_adj: -16
	I0817 21:11:30.084059   18563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:30.149020   18563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:30.738138   18563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:31.237623   18563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:31.738110   18563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:32.238440   18563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:32.738538   18563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:33.238513   18563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:33.738237   18563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:34.238371   18563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:34.737996   18563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:35.237724   18563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:35.737547   18563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:36.237919   18563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:36.738378   18563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:37.238187   18563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:37.738514   18563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:38.237547   18563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:38.738175   18563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:39.238412   18563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:39.738220   18563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:40.237725   18563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:40.738458   18563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:41.237613   18563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:41.737705   18563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:42.237643   18563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:11:42.360151   18563 kubeadm.go:1081] duration metric: took 12.351773651s to wait for elevateKubeSystemPrivileges.
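	The burst of identical "kubectl get sa default" runs above is a poll loop waiting for the default ServiceAccount to exist (the timestamps step by roughly 500ms); a hedged bash equivalent:
	
	    until sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5    # interval inferred from the log's timestamp spacing, not from minikube source
	    done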
	I0817 21:11:42.360178   18563 kubeadm.go:406] StartCluster complete in 22.235527509s
	I0817 21:11:42.360195   18563 settings.go:142] acquiring lock: {Name:mkab7abc846835e928b69a2120c7e34b55f8acdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:11:42.360297   18563 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16865-10716/kubeconfig
	I0817 21:11:42.360815   18563 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-10716/kubeconfig: {Name:mk8d25353b4b324f395053b70676ed1b624da94d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:11:42.361008   18563 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 21:11:42.361085   18563 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0817 21:11:42.361178   18563 addons.go:69] Setting volumesnapshots=true in profile "addons-418182"
	I0817 21:11:42.361199   18563 addons.go:231] Setting addon volumesnapshots=true in "addons-418182"
	I0817 21:11:42.361199   18563 addons.go:69] Setting ingress=true in profile "addons-418182"
	I0817 21:11:42.361221   18563 addons.go:231] Setting addon ingress=true in "addons-418182"
	I0817 21:11:42.361248   18563 host.go:66] Checking if "addons-418182" exists ...
	I0817 21:11:42.361278   18563 host.go:66] Checking if "addons-418182" exists ...
	I0817 21:11:42.361283   18563 config.go:182] Loaded profile config "addons-418182": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 21:11:42.361324   18563 addons.go:69] Setting default-storageclass=true in profile "addons-418182"
	I0817 21:11:42.361344   18563 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-418182"
	I0817 21:11:42.361410   18563 addons.go:69] Setting cloud-spanner=true in profile "addons-418182"
	I0817 21:11:42.361441   18563 addons.go:231] Setting addon cloud-spanner=true in "addons-418182"
	I0817 21:11:42.361499   18563 host.go:66] Checking if "addons-418182" exists ...
	I0817 21:11:42.361645   18563 cli_runner.go:164] Run: docker container inspect addons-418182 --format={{.State.Status}}
	I0817 21:11:42.361699   18563 cli_runner.go:164] Run: docker container inspect addons-418182 --format={{.State.Status}}
	I0817 21:11:42.361749   18563 cli_runner.go:164] Run: docker container inspect addons-418182 --format={{.State.Status}}
	I0817 21:11:42.361800   18563 addons.go:69] Setting ingress-dns=true in profile "addons-418182"
	I0817 21:11:42.361851   18563 addons.go:69] Setting metrics-server=true in profile "addons-418182"
	I0817 21:11:42.361834   18563 addons.go:69] Setting gcp-auth=true in profile "addons-418182"
	I0817 21:11:42.361869   18563 addons.go:231] Setting addon metrics-server=true in "addons-418182"
	I0817 21:11:42.361876   18563 mustload.go:65] Loading cluster: addons-418182
	I0817 21:11:42.361928   18563 host.go:66] Checking if "addons-418182" exists ...
	I0817 21:11:42.361940   18563 cli_runner.go:164] Run: docker container inspect addons-418182 --format={{.State.Status}}
	I0817 21:11:42.361841   18563 addons.go:69] Setting helm-tiller=true in profile "addons-418182"
	I0817 21:11:42.362715   18563 config.go:182] Loaded profile config "addons-418182": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 21:11:42.362883   18563 addons.go:231] Setting addon helm-tiller=true in "addons-418182"
	I0817 21:11:42.362931   18563 host.go:66] Checking if "addons-418182" exists ...
	I0817 21:11:42.361857   18563 addons.go:231] Setting addon ingress-dns=true in "addons-418182"
	I0817 21:11:42.363002   18563 host.go:66] Checking if "addons-418182" exists ...
	I0817 21:11:42.363494   18563 cli_runner.go:164] Run: docker container inspect addons-418182 --format={{.State.Status}}
	I0817 21:11:42.363588   18563 cli_runner.go:164] Run: docker container inspect addons-418182 --format={{.State.Status}}
	I0817 21:11:42.363703   18563 addons.go:69] Setting inspektor-gadget=true in profile "addons-418182"
	I0817 21:11:42.363721   18563 addons.go:231] Setting addon inspektor-gadget=true in "addons-418182"
	I0817 21:11:42.363755   18563 host.go:66] Checking if "addons-418182" exists ...
	I0817 21:11:42.364324   18563 cli_runner.go:164] Run: docker container inspect addons-418182 --format={{.State.Status}}
	I0817 21:11:42.364439   18563 addons.go:69] Setting registry=true in profile "addons-418182"
	I0817 21:11:42.364452   18563 addons.go:231] Setting addon registry=true in "addons-418182"
	I0817 21:11:42.364493   18563 host.go:66] Checking if "addons-418182" exists ...
	I0817 21:11:42.365033   18563 cli_runner.go:164] Run: docker container inspect addons-418182 --format={{.State.Status}}
	I0817 21:11:42.365162   18563 addons.go:69] Setting storage-provisioner=true in profile "addons-418182"
	I0817 21:11:42.365173   18563 addons.go:231] Setting addon storage-provisioner=true in "addons-418182"
	I0817 21:11:42.365211   18563 host.go:66] Checking if "addons-418182" exists ...
	I0817 21:11:42.365768   18563 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-418182"
	I0817 21:11:42.365812   18563 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-418182"
	I0817 21:11:42.365945   18563 host.go:66] Checking if "addons-418182" exists ...
	I0817 21:11:42.366062   18563 cli_runner.go:164] Run: docker container inspect addons-418182 --format={{.State.Status}}
	I0817 21:11:42.366065   18563 cli_runner.go:164] Run: docker container inspect addons-418182 --format={{.State.Status}}
	I0817 21:11:42.366183   18563 cli_runner.go:164] Run: docker container inspect addons-418182 --format={{.State.Status}}
	I0817 21:11:42.389235   18563 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0817 21:11:42.387796   18563 cli_runner.go:164] Run: docker container inspect addons-418182 --format={{.State.Status}}
	I0817 21:11:42.401945   18563 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0817 21:11:42.403708   18563 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0817 21:11:42.403729   18563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0817 21:11:42.403792   18563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418182
	I0817 21:11:42.405182   18563 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.9
	I0817 21:11:42.405450   18563 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0817 21:11:42.406881   18563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0817 21:11:42.406885   18563 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I0817 21:11:42.406900   18563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)
	I0817 21:11:42.406965   18563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418182
	I0817 21:11:42.407056   18563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418182
	I0817 21:11:42.411609   18563 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.19.0
	I0817 21:11:42.412856   18563 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0817 21:11:42.412874   18563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0817 21:11:42.414242   18563 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.8.1
	I0817 21:11:42.412930   18563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418182
	I0817 21:11:42.417216   18563 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0817 21:11:42.418763   18563 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 21:11:42.418728   18563 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0817 21:11:42.422023   18563 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0817 21:11:42.422042   18563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0817 21:11:42.422101   18563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418182
	I0817 21:11:42.420146   18563 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 21:11:42.422331   18563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 21:11:42.422379   18563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418182
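The repeated "scp memory --> ..." lines mean the addon manifests are embedded in the minikube binary and written straight from byte slices to files on the node over the SSH session, rather than copied from files on the host. A rough sketch of the same idea, assuming a plain ssh client plus sudo tee on the node (the helper is hypothetical; minikube's actual ssh_runner/sshutil plumbing differs):

    package main

    import (
        "bytes"
        "log"
        "os/exec"
    )

    // copyMemory pipes an in-memory manifest to a remote path, loosely
    // mirroring the "scp memory --> /etc/kubernetes/addons/..." log lines.
    func copyMemory(userHost, port, remotePath string, data []byte) error {
        cmd := exec.Command("ssh", "-p", port, userHost,
            "sudo tee "+remotePath+" >/dev/null")
        cmd.Stdin = bytes.NewReader(data) // the "memory" side of the copy
        return cmd.Run()
    }

    func main() {
        manifest := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: demo\n")
        if err := copyMemory("docker@127.0.0.1", "32772", "/tmp/demo.yaml", manifest); err != nil {
            log.Fatal(err)
        }
    }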
	I0817 21:11:42.428374   18563 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0817 21:11:42.429763   18563 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0817 21:11:42.431169   18563 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0817 21:11:42.437502   18563 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0817 21:11:42.435202   18563 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0817 21:11:42.440265   18563 addons.go:231] Setting addon default-storageclass=true in "addons-418182"
	I0817 21:11:42.440678   18563 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0817 21:11:42.441990   18563 out.go:177]   - Using image docker.io/registry:2.8.1
	I0817 21:11:42.442017   18563 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0817 21:11:42.442054   18563 host.go:66] Checking if "addons-418182" exists ...
	I0817 21:11:42.442067   18563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0817 21:11:42.443484   18563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418182
	I0817 21:11:42.444010   18563 cli_runner.go:164] Run: docker container inspect addons-418182 --format={{.State.Status}}
	I0817 21:11:42.444878   18563 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0817 21:11:42.446291   18563 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0817 21:11:42.447657   18563 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I0817 21:11:42.447670   18563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0817 21:11:42.446306   18563 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0817 21:11:42.447721   18563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418182
	I0817 21:11:42.447856   18563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/addons-418182/id_rsa Username:docker}
	I0817 21:11:42.447667   18563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0817 21:11:42.449219   18563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418182
	I0817 21:11:42.457463   18563 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0817 21:11:42.457631   18563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/addons-418182/id_rsa Username:docker}
	I0817 21:11:42.459129   18563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/addons-418182/id_rsa Username:docker}
	I0817 21:11:42.459758   18563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/addons-418182/id_rsa Username:docker}
	I0817 21:11:42.460277   18563 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0817 21:11:42.461876   18563 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0817 21:11:42.463421   18563 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0817 21:11:42.463437   18563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0817 21:11:42.463486   18563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418182
	I0817 21:11:42.463034   18563 host.go:66] Checking if "addons-418182" exists ...
	I0817 21:11:42.466912   18563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/addons-418182/id_rsa Username:docker}
	I0817 21:11:42.473575   18563 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 21:11:42.473594   18563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 21:11:42.473642   18563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418182
	I0817 21:11:42.473979   18563 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-418182" context rescaled to 1 replicas
	I0817 21:11:42.474010   18563 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}
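The kapi.go:248 line above trims CoreDNS down to one replica, since a single-node cluster gains nothing from the stock second copy. With client-go, an equivalent scale adjustment looks roughly like this (the API calls are real; cs is an assumed *kubernetes.Clientset built from the local kubeconfig, and error handling is elided):

    scale, err := cs.AppsV1().Deployments("kube-system").
        GetScale(context.TODO(), "coredns", metav1.GetOptions{})
    if err == nil {
        scale.Spec.Replicas = 1 // down to a single replica
        _, err = cs.AppsV1().Deployments("kube-system").
            UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{})
    }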
	I0817 21:11:42.475574   18563 out.go:177] * Verifying Kubernetes components...
	I0817 21:11:42.478100   18563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 21:11:42.489882   18563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/addons-418182/id_rsa Username:docker}
	I0817 21:11:42.491595   18563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/addons-418182/id_rsa Username:docker}
	I0817 21:11:42.497096   18563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/addons-418182/id_rsa Username:docker}
	I0817 21:11:42.497248   18563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/addons-418182/id_rsa Username:docker}
	I0817 21:11:42.498534   18563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/addons-418182/id_rsa Username:docker}
	I0817 21:11:42.505201   18563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/addons-418182/id_rsa Username:docker}
	I0817 21:11:42.650879   18563 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0817 21:11:42.651830   18563 node_ready.go:35] waiting up to 6m0s for node "addons-418182" to be "Ready" ...
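node_ready.go:35 starts a loop that re-reads the Node object until its Ready condition reports True; the `has status "Ready":"False"` lines scattered below are that loop observing the node before kubelet and the CNI settle. A sketch of an equivalent check with client-go (the client-go calls are real; the program itself is illustrative):

    package main

    import (
        "context"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // Poll the Ready condition, roughly what "waiting up to 6m0s for
        // node ... to be Ready" is doing.
        err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-418182", metav1.GetOptions{})
            if err != nil {
                return false, nil // transient API errors: keep polling
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
        log.Println("node ready:", err == nil)
    }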
	I0817 21:11:42.724559   18563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0817 21:11:42.753084   18563 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0817 21:11:42.753107   18563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0817 21:11:42.823374   18563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0817 21:11:42.825875   18563 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0817 21:11:42.825964   18563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0817 21:11:42.841472   18563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0817 21:11:42.930283   18563 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0817 21:11:42.930356   18563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0817 21:11:42.944652   18563 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I0817 21:11:42.944725   18563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0817 21:11:43.024301   18563 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0817 21:11:43.024385   18563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0817 21:11:43.024653   18563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 21:11:43.025231   18563 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0817 21:11:43.025287   18563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0817 21:11:43.043083   18563 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0817 21:11:43.043162   18563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0817 21:11:43.043786   18563 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0817 21:11:43.043875   18563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0817 21:11:43.129181   18563 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0817 21:11:43.129212   18563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0817 21:11:43.226366   18563 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0817 21:11:43.226458   18563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0817 21:11:43.236095   18563 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0817 21:11:43.236176   18563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0817 21:11:43.239004   18563 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0817 21:11:43.239067   18563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0817 21:11:43.243559   18563 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 21:11:43.243628   18563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0817 21:11:43.323397   18563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 21:11:43.338024   18563 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0817 21:11:43.338107   18563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0817 21:11:43.338857   18563 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0817 21:11:43.338919   18563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0817 21:11:43.440734   18563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0817 21:11:43.441217   18563 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0817 21:11:43.441277   18563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0817 21:11:43.441540   18563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0817 21:11:43.523465   18563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0817 21:11:43.524214   18563 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0817 21:11:43.524268   18563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0817 21:11:43.529436   18563 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0817 21:11:43.529496   18563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0817 21:11:43.728650   18563 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0817 21:11:43.728760   18563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0817 21:11:43.830351   18563 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0817 21:11:43.830379   18563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0817 21:11:43.935302   18563 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0817 21:11:43.935387   18563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0817 21:11:44.123403   18563 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0817 21:11:44.123505   18563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0817 21:11:44.136047   18563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0817 21:11:44.224626   18563 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0817 21:11:44.224656   18563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0817 21:11:44.435738   18563 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0817 21:11:44.435827   18563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0817 21:11:44.638990   18563 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0817 21:11:44.639077   18563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0817 21:11:44.830401   18563 node_ready.go:58] node "addons-418182" has status "Ready":"False"
	I0817 21:11:44.924915   18563 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0817 21:11:44.924988   18563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0817 21:11:45.036708   18563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0817 21:11:45.123630   18563 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.47269807s)
	I0817 21:11:45.123669   18563 start.go:901] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
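The sed pipeline that just completed rewrites the coredns ConfigMap in flight: it inserts a log directive ahead of the errors line and, ahead of forward . /etc/resolv.conf, a hosts stanza so pods can resolve the host gateway by name. Reconstructed from the sed expressions above, the injected Corefile fragment is:

        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }

The edited ConfigMap is then pushed back with kubectl replace -f -, which is why the follow-up line reports the host record as injected.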
	I0817 21:11:45.423710   18563 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0817 21:11:45.423739   18563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0817 21:11:45.541320   18563 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0817 21:11:45.541347   18563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0817 21:11:45.831001   18563 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0817 21:11:45.831075   18563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0817 21:11:46.036079   18563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0817 21:11:46.143314   18563 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.418714929s)
	I0817 21:11:47.243715   18563 node_ready.go:58] node "addons-418182" has status "Ready":"False"
	I0817 21:11:48.051438   18563 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.228027644s)
	I0817 21:11:48.051474   18563 addons.go:467] Verifying addon ingress=true in "addons-418182"
	I0817 21:11:48.051497   18563 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.209934205s)
	I0817 21:11:48.052912   18563 out.go:177] * Verifying ingress addon...
	I0817 21:11:48.051572   18563 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.026873777s)
	I0817 21:11:48.051623   18563 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.72812217s)
	I0817 21:11:48.051665   18563 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.610838935s)
	I0817 21:11:48.051734   18563 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (4.61013246s)
	I0817 21:11:48.051832   18563 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.528264783s)
	I0817 21:11:48.051933   18563 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.915844019s)
	I0817 21:11:48.052013   18563 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.015265547s)
	I0817 21:11:48.054388   18563 addons.go:467] Verifying addon registry=true in "addons-418182"
	W0817 21:11:48.054442   18563 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0817 21:11:48.054469   18563 retry.go:31] will retry after 127.893271ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0817 21:11:48.056121   18563 out.go:177] * Verifying registry addon...
	I0817 21:11:48.054405   18563 addons.go:467] Verifying addon metrics-server=true in "addons-418182"
	I0817 21:11:48.055059   18563 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0817 21:11:48.058540   18563 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0817 21:11:48.061502   18563 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0817 21:11:48.061517   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:11:48.061844   18563 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0817 21:11:48.061865   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:11:48.064619   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:11:48.064900   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:11:48.183443   18563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
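The retry above is the classic CRD ordering race: the VolumeSnapshotClass object travels in the same kubectl apply batch as the CRDs that define it, and the API server's discovery information has not yet registered the new kinds, hence "no matches for kind ... ensure CRDs are installed first". retry.go waits ~128ms and reissues the apply, now with --force, as the Run line above shows. A hedged sketch of that retry shape (the backoff values are illustrative, not minikube's):

    package main

    import (
        "log"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        dir := "/etc/kubernetes/addons" // CRDs and their objects in one batch
        for attempt := 0; attempt < 5; attempt++ {
            out, err := exec.Command("kubectl", "apply", "-f", dir).CombinedOutput()
            if err == nil {
                return
            }
            // Only the "CRD not yet registered" case is worth retrying.
            if strings.Contains(string(out), "ensure CRDs are installed first") {
                time.Sleep(time.Duration(1<<attempt) * 100 * time.Millisecond)
                continue
            }
            log.Fatalf("apply failed: %v\n%s", err, out)
        }
        log.Fatal("apply still failing after retries")
    }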
	I0817 21:11:48.569165   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:11:48.569524   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:11:49.069004   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:11:49.069391   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:11:49.158090   18563 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.121907678s)
	I0817 21:11:49.158133   18563 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-418182"
	I0817 21:11:49.159896   18563 out.go:177] * Verifying csi-hostpath-driver addon...
	I0817 21:11:49.162581   18563 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0817 21:11:49.166349   18563 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0817 21:11:49.166372   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:11:49.228990   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
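The kapi.go:96 lines that fill the rest of this log are concurrent wait loops, one per verified addon (ingress-nginx, registry, gcp-auth, csi-hostpath-driver), each re-listing pods by label selector and logging the phase until everything is Running. One such loop might look like the following condition, reusing the clientset from the node sketch above (the client-go calls are real; the helper name is hypothetical):

    // podsRunning reports whether every pod matching sel in ns is Running,
    // mirroring one kapi.go:96 wait loop.
    func podsRunning(cs *kubernetes.Clientset, ns, sel string) wait.ConditionFunc {
        return func() (bool, error) {
            pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
                metav1.ListOptions{LabelSelector: sel})
            if err != nil || len(pods.Items) == 0 {
                return false, nil
            }
            for _, p := range pods.Items {
                if p.Status.Phase != corev1.PodRunning {
                    return false, nil // surfaces here as "current state: Pending"
                }
            }
            return true, nil
        }
    }

    // e.g. wait.PollImmediate(500*time.Millisecond, 6*time.Minute,
    //     podsRunning(cs, "kube-system", "kubernetes.io/minikube-addons=csi-hostpath-driver"))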
	I0817 21:11:49.270059   18563 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0817 21:11:49.270143   18563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418182
	I0817 21:11:49.287229   18563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/addons-418182/id_rsa Username:docker}
	I0817 21:11:49.385850   18563 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0817 21:11:49.402485   18563 addons.go:231] Setting addon gcp-auth=true in "addons-418182"
	I0817 21:11:49.402545   18563 host.go:66] Checking if "addons-418182" exists ...
	I0817 21:11:49.402869   18563 cli_runner.go:164] Run: docker container inspect addons-418182 --format={{.State.Status}}
	I0817 21:11:49.418149   18563 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0817 21:11:49.418188   18563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418182
	I0817 21:11:49.437747   18563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/addons-418182/id_rsa Username:docker}
	I0817 21:11:49.458421   18563 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.274934184s)
	I0817 21:11:49.526839   18563 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0817 21:11:49.528497   18563 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0817 21:11:49.530018   18563 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0817 21:11:49.530037   18563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0817 21:11:49.545626   18563 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0817 21:11:49.545646   18563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0817 21:11:49.560419   18563 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0817 21:11:49.560438   18563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0817 21:11:49.568689   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:11:49.568938   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:11:49.575380   18563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0817 21:11:49.732850   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:11:49.740106   18563 node_ready.go:58] node "addons-418182" has status "Ready":"False"
	I0817 21:11:50.054546   18563 addons.go:467] Verifying addon gcp-auth=true in "addons-418182"
	I0817 21:11:50.057369   18563 out.go:177] * Verifying gcp-auth addon...
	I0817 21:11:50.060113   18563 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0817 21:11:50.062819   18563 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0817 21:11:50.062835   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:11:50.064746   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:11:50.067695   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:11:50.068617   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:11:50.235221   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:11:50.627890   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:11:50.628624   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:11:50.629585   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:11:50.733734   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:11:51.126739   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:11:51.127415   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:11:51.127853   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:11:51.234135   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:11:51.627583   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:11:51.628201   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:11:51.630269   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:11:51.735116   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:11:51.740350   18563 node_ready.go:58] node "addons-418182" has status "Ready":"False"
	I0817 21:11:52.126626   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:11:52.127206   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:11:52.127548   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:11:52.237250   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:11:52.625096   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:11:52.626037   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:11:52.626739   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:11:52.734411   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:11:53.126010   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:11:53.126010   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:11:53.126591   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:11:53.237719   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:11:53.626550   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:11:53.626623   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:11:53.626689   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:11:53.734301   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:11:53.740992   18563 node_ready.go:58] node "addons-418182" has status "Ready":"False"
	I0817 21:11:54.068265   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:11:54.068878   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:11:54.069076   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:11:54.234298   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:11:54.569067   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:11:54.569713   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:11:54.569750   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:11:54.733652   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:11:55.124307   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:11:55.124851   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:11:55.125068   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:11:55.233169   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:11:55.568603   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:11:55.569555   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:11:55.572034   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:11:55.733466   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:11:56.069580   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:11:56.069770   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:11:56.070174   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:11:56.233722   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:11:56.239844   18563 node_ready.go:58] node "addons-418182" has status "Ready":"False"
	I0817 21:11:56.568346   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:11:56.568613   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:11:56.568972   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:11:56.733212   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:11:57.068347   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:11:57.068895   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:11:57.069307   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:11:57.233447   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:11:57.569167   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:11:57.569453   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:11:57.569704   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:11:57.733467   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:11:58.069053   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:11:58.069404   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:11:58.069647   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:11:58.233700   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:11:58.240070   18563 node_ready.go:58] node "addons-418182" has status "Ready":"False"
	I0817 21:11:58.570139   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:11:58.570660   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:11:58.570912   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:11:58.733129   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:11:59.068765   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:11:59.069073   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:11:59.069121   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:11:59.233645   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:11:59.568660   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:11:59.569154   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:11:59.569355   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:11:59.732872   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:00.068066   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:00.069853   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:00.070506   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:00.233408   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:00.568831   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:00.568883   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:00.569099   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:00.733448   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:00.739325   18563 node_ready.go:58] node "addons-418182" has status "Ready":"False"
	I0817 21:12:01.068686   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:01.068924   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:01.069147   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:01.233589   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:01.568742   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:01.569042   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:01.569097   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:01.733056   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:02.068331   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:02.068537   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:02.068665   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:02.232719   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:02.568172   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:02.568423   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:02.568581   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:02.733003   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:02.740033   18563 node_ready.go:58] node "addons-418182" has status "Ready":"False"
	I0817 21:12:03.068617   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:03.068660   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:03.068958   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:03.233080   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:03.568186   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:03.568659   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:03.568729   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:03.733232   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:04.068223   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:04.068596   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:04.068866   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:04.232813   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:04.567928   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:04.568187   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:04.568404   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:04.732566   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:05.067991   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:05.068133   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:05.068398   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:05.234052   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:05.239172   18563 node_ready.go:58] node "addons-418182" has status "Ready":"False"
	I0817 21:12:05.568423   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:05.568718   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:05.569201   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:05.733063   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:06.068671   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:06.068885   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:06.069218   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:06.233199   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:06.568323   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:06.568778   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:06.568901   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:06.733434   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:07.068459   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:07.068587   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:07.068861   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:07.233345   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:07.239370   18563 node_ready.go:58] node "addons-418182" has status "Ready":"False"
	I0817 21:12:07.569087   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:07.569393   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:07.569602   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:07.733133   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:08.068759   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:08.068833   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:08.069009   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:08.233523   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:08.568574   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:08.568979   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:08.569214   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:08.733723   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:09.070548   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:09.070565   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:09.071055   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:09.233075   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:09.240275   18563 node_ready.go:58] node "addons-418182" has status "Ready":"False"
	I0817 21:12:09.568585   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:09.568722   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:09.569086   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:09.733275   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:10.068353   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:10.068723   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:10.068785   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:10.233073   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:10.568809   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:10.568834   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:10.569014   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:10.732504   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:11.068860   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:11.069208   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:11.069339   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:11.232646   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:11.568828   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:11.568828   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:11.569177   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:11.733593   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:11.739800   18563 node_ready.go:58] node "addons-418182" has status "Ready":"False"
	I0817 21:12:12.067856   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:12.068189   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:12.068303   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:12.232693   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:12.570126   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:12.570272   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:12.570564   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:12.732753   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:13.067892   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:13.068542   18563 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0817 21:12:13.068553   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:13.068565   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:13.238502   18563 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0817 21:12:13.238529   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:13.242905   18563 node_ready.go:49] node "addons-418182" has status "Ready":"True"
	I0817 21:12:13.242933   18563 node_ready.go:38] duration metric: took 30.591079231s waiting for node "addons-418182" to be "Ready" ...
	I0817 21:12:13.242944   18563 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods, including those with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I0817 21:12:13.254494   18563 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-sv2j7" in "kube-system" namespace to be "Ready" ...
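	(The pod_ready wait above is a simple poll against the pod's Ready condition. A minimal client-go sketch of the same idea — the pod name and 6m0s budget are taken from the log; the kubeconfig path, poll interval, and error handling are assumptions for illustration:)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: a standard kubeconfig at ~/.kube/config.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Poll until the Ready condition is True, up to the 6m0s budget from the log.
		err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5d78c9869d-sv2j7", metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
		fmt.Println("pod ready:", err == nil)
	}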
	I0817 21:12:13.569613   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:13.570131   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:13.570146   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:13.736221   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:14.068075   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:14.068663   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:14.068965   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:14.235582   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:14.624835   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:14.625198   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:14.629990   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:14.735706   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:14.840655   18563 pod_ready.go:92] pod "coredns-5d78c9869d-sv2j7" in "kube-system" namespace has status "Ready":"True"
	I0817 21:12:14.840679   18563 pod_ready.go:81] duration metric: took 1.586159942s waiting for pod "coredns-5d78c9869d-sv2j7" in "kube-system" namespace to be "Ready" ...
	I0817 21:12:14.840706   18563 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-418182" in "kube-system" namespace to be "Ready" ...
	I0817 21:12:14.845463   18563 pod_ready.go:92] pod "etcd-addons-418182" in "kube-system" namespace has status "Ready":"True"
	I0817 21:12:14.845483   18563 pod_ready.go:81] duration metric: took 4.769492ms waiting for pod "etcd-addons-418182" in "kube-system" namespace to be "Ready" ...
	I0817 21:12:14.845498   18563 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-418182" in "kube-system" namespace to be "Ready" ...
	I0817 21:12:14.850425   18563 pod_ready.go:92] pod "kube-apiserver-addons-418182" in "kube-system" namespace has status "Ready":"True"
	I0817 21:12:14.850468   18563 pod_ready.go:81] duration metric: took 4.962174ms waiting for pod "kube-apiserver-addons-418182" in "kube-system" namespace to be "Ready" ...
	I0817 21:12:14.850480   18563 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-418182" in "kube-system" namespace to be "Ready" ...
	I0817 21:12:14.855129   18563 pod_ready.go:92] pod "kube-controller-manager-addons-418182" in "kube-system" namespace has status "Ready":"True"
	I0817 21:12:14.855152   18563 pod_ready.go:81] duration metric: took 4.663669ms waiting for pod "kube-controller-manager-addons-418182" in "kube-system" namespace to be "Ready" ...
	I0817 21:12:14.855166   18563 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7kj5p" in "kube-system" namespace to be "Ready" ...
	I0817 21:12:15.068653   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:15.069640   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:15.069718   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:15.234689   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:15.240627   18563 pod_ready.go:92] pod "kube-proxy-7kj5p" in "kube-system" namespace has status "Ready":"True"
	I0817 21:12:15.240652   18563 pod_ready.go:81] duration metric: took 385.476004ms waiting for pod "kube-proxy-7kj5p" in "kube-system" namespace to be "Ready" ...
	I0817 21:12:15.240664   18563 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-418182" in "kube-system" namespace to be "Ready" ...
	I0817 21:12:15.568479   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:15.569582   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:15.569713   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:15.641034   18563 pod_ready.go:92] pod "kube-scheduler-addons-418182" in "kube-system" namespace has status "Ready":"True"
	I0817 21:12:15.641056   18563 pod_ready.go:81] duration metric: took 400.384351ms waiting for pod "kube-scheduler-addons-418182" in "kube-system" namespace to be "Ready" ...
	I0817 21:12:15.641069   18563 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7746886d4f-hxxz9" in "kube-system" namespace to be "Ready" ...
	I0817 21:12:15.734394   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:16.071013   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:16.072071   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:16.072358   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:16.234831   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:16.570332   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:16.570890   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:16.570980   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:16.735110   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:17.068511   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:17.069238   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:17.069720   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:17.235290   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:17.568474   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:17.569149   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:17.569168   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:17.734125   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:17.945063   18563 pod_ready.go:102] pod "metrics-server-7746886d4f-hxxz9" in "kube-system" namespace has status "Ready":"False"
	I0817 21:12:18.068444   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:18.068851   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:18.068953   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:18.234913   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:18.569506   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:18.569843   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:18.624496   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:18.735802   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:19.069319   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:19.069827   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:19.070022   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:19.234394   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:19.569204   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:19.571857   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:19.571884   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:19.733645   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:19.947342   18563 pod_ready.go:102] pod "metrics-server-7746886d4f-hxxz9" in "kube-system" namespace has status "Ready":"False"
	I0817 21:12:20.128567   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:20.129506   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:20.129879   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:20.238807   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:20.629195   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:20.629713   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:20.635515   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:20.736513   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:21.125464   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:21.126790   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:21.127586   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:21.234960   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:21.568296   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:21.569498   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:21.570199   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:21.734750   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:22.069286   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:22.069584   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:22.070286   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:22.234770   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:22.445977   18563 pod_ready.go:102] pod "metrics-server-7746886d4f-hxxz9" in "kube-system" namespace has status "Ready":"False"
	I0817 21:12:22.569051   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:22.569765   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:22.570195   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:22.734457   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:23.068396   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:23.068897   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:23.069149   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:23.235242   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:23.568420   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:23.569019   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:23.569198   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:23.737380   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:24.069191   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:24.069892   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:24.070037   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:24.234780   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:24.569044   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:24.569924   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:24.570545   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:24.735059   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:24.946143   18563 pod_ready.go:102] pod "metrics-server-7746886d4f-hxxz9" in "kube-system" namespace has status "Ready":"False"
	I0817 21:12:25.068409   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:25.069247   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:25.069349   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:25.234762   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:25.569065   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:25.569981   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:25.570083   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:25.733950   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:26.068574   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:26.068971   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:26.069347   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:26.233493   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:26.573329   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:26.574090   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:26.576021   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:26.736282   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:26.947263   18563 pod_ready.go:102] pod "metrics-server-7746886d4f-hxxz9" in "kube-system" namespace has status "Ready":"False"
	I0817 21:12:27.126019   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:27.126809   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:27.126883   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:27.234627   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:27.568696   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:27.569341   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:27.569390   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:27.734352   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:28.068613   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:28.069134   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:28.069205   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:28.234203   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:28.569082   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:28.569177   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:28.570125   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:28.735045   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:29.068099   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:29.068799   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:29.068981   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:29.234548   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:29.445429   18563 pod_ready.go:102] pod "metrics-server-7746886d4f-hxxz9" in "kube-system" namespace has status "Ready":"False"
	I0817 21:12:29.568703   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:29.569399   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:29.569534   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:29.734634   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:30.067998   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:30.068981   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:30.069241   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:30.235047   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:30.568378   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:30.569750   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:30.569920   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:30.736820   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:31.068709   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:31.069531   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:31.070093   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:31.234013   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:31.445806   18563 pod_ready.go:102] pod "metrics-server-7746886d4f-hxxz9" in "kube-system" namespace has status "Ready":"False"
	I0817 21:12:31.567880   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:31.568603   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:31.568603   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:31.832579   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:31.945080   18563 pod_ready.go:92] pod "metrics-server-7746886d4f-hxxz9" in "kube-system" namespace has status "Ready":"True"
	I0817 21:12:31.945101   18563 pod_ready.go:81] duration metric: took 16.304001391s waiting for pod "metrics-server-7746886d4f-hxxz9" in "kube-system" namespace to be "Ready" ...
	I0817 21:12:31.945119   18563 pod_ready.go:38] duration metric: took 18.702162671s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 21:12:31.945134   18563 api_server.go:52] waiting for apiserver process to appear ...
	I0817 21:12:31.945178   18563 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 21:12:31.957174   18563 api_server.go:72] duration metric: took 49.483133699s to wait for apiserver process to appear ...
	I0817 21:12:31.957201   18563 api_server.go:88] waiting for apiserver healthz status ...
	I0817 21:12:31.957220   18563 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 21:12:31.961884   18563 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0817 21:12:31.962894   18563 api_server.go:141] control plane version: v1.27.4
	I0817 21:12:31.962914   18563 api_server.go:131] duration metric: took 5.707705ms to wait for apiserver health ...
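	(The healthz check above reduces to an HTTPS GET against the apiserver that expects a 200 response with body "ok", as seen in the log. A minimal Go sketch — InsecureSkipVerify is an assumption for illustration only; the real check authenticates against the cluster CA:)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			fmt.Println("healthz unreachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d %s\n", resp.StatusCode, body) // healthy apiserver: 200 ok
	}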
	I0817 21:12:31.962921   18563 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 21:12:31.972212   18563 system_pods.go:59] 18 kube-system pods found
	I0817 21:12:31.972240   18563 system_pods.go:61] "coredns-5d78c9869d-sv2j7" [e7e77bb6-55fa-42b9-8f07-2fafc7b07661] Running
	I0817 21:12:31.972246   18563 system_pods.go:61] "csi-hostpath-attacher-0" [cb72ae8d-ae6d-4015-b2c1-c25aa5e74bde] Running
	I0817 21:12:31.972252   18563 system_pods.go:61] "csi-hostpath-resizer-0" [b2508254-9b1f-4ef6-a5e6-35eed827264e] Running
	I0817 21:12:31.972264   18563 system_pods.go:61] "csi-hostpathplugin-jpt2g" [5d54402b-0aa0-4640-a48f-ee954c9d601d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0817 21:12:31.972275   18563 system_pods.go:61] "etcd-addons-418182" [963dcd62-b498-4ebb-a987-e521f5e4a582] Running
	I0817 21:12:31.972286   18563 system_pods.go:61] "kindnet-5jbf4" [630239f7-ddcf-470d-bed4-c1e27e02ac3a] Running
	I0817 21:12:31.972296   18563 system_pods.go:61] "kube-apiserver-addons-418182" [2dd34e4a-1b5e-4af9-8401-d75fc90d3e7a] Running
	I0817 21:12:31.972307   18563 system_pods.go:61] "kube-controller-manager-addons-418182" [531c6d37-3181-4ee3-8c89-3cc2b1f85a8c] Running
	I0817 21:12:31.972318   18563 system_pods.go:61] "kube-ingress-dns-minikube" [099b5fb8-6322-4e5e-80fa-db1837cacc68] Running
	I0817 21:12:31.972329   18563 system_pods.go:61] "kube-proxy-7kj5p" [d7aaea71-7374-49d5-9ba9-c8a8780c35d6] Running
	I0817 21:12:31.972339   18563 system_pods.go:61] "kube-scheduler-addons-418182" [40cba87a-05be-4daa-8945-b2815c5e5fe7] Running
	I0817 21:12:31.972349   18563 system_pods.go:61] "metrics-server-7746886d4f-hxxz9" [17f8de9b-b07e-447b-86c7-b4dbe0e78707] Running
	I0817 21:12:31.972362   18563 system_pods.go:61] "registry-jvrmk" [9aab1115-3b3c-44fc-a53c-2c86008dc60c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0817 21:12:31.972378   18563 system_pods.go:61] "registry-proxy-xx5bc" [ff6fee0b-74e6-4311-a81d-8123bc66f740] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0817 21:12:31.972389   18563 system_pods.go:61] "snapshot-controller-75bbb956b9-kf5vz" [4f37c3c2-6208-4957-bfb5-4775ac144285] Running
	I0817 21:12:31.972399   18563 system_pods.go:61] "snapshot-controller-75bbb956b9-lw2sl" [dc0a091d-167f-476e-be4e-b86b6195c7c6] Running
	I0817 21:12:31.972409   18563 system_pods.go:61] "storage-provisioner" [0a950d77-3a3b-46c8-aa9a-b460b49234b6] Running
	I0817 21:12:31.972419   18563 system_pods.go:61] "tiller-deploy-6847666dc-jqhh9" [973f9c57-236d-465a-af89-11b2247a28eb] Running
	I0817 21:12:31.972427   18563 system_pods.go:74] duration metric: took 9.500839ms to wait for pod list to return data ...
	I0817 21:12:31.972439   18563 default_sa.go:34] waiting for default service account to be created ...
	I0817 21:12:31.974391   18563 default_sa.go:45] found service account: "default"
	I0817 21:12:31.974410   18563 default_sa.go:55] duration metric: took 1.962129ms for default service account to be created ...
	I0817 21:12:31.974418   18563 system_pods.go:116] waiting for k8s-apps to be running ...
	I0817 21:12:31.983077   18563 system_pods.go:86] 18 kube-system pods found
	I0817 21:12:31.983099   18563 system_pods.go:89] "coredns-5d78c9869d-sv2j7" [e7e77bb6-55fa-42b9-8f07-2fafc7b07661] Running
	I0817 21:12:31.983104   18563 system_pods.go:89] "csi-hostpath-attacher-0" [cb72ae8d-ae6d-4015-b2c1-c25aa5e74bde] Running
	I0817 21:12:31.983109   18563 system_pods.go:89] "csi-hostpath-resizer-0" [b2508254-9b1f-4ef6-a5e6-35eed827264e] Running
	I0817 21:12:31.983117   18563 system_pods.go:89] "csi-hostpathplugin-jpt2g" [5d54402b-0aa0-4640-a48f-ee954c9d601d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0817 21:12:31.983123   18563 system_pods.go:89] "etcd-addons-418182" [963dcd62-b498-4ebb-a987-e521f5e4a582] Running
	I0817 21:12:31.983128   18563 system_pods.go:89] "kindnet-5jbf4" [630239f7-ddcf-470d-bed4-c1e27e02ac3a] Running
	I0817 21:12:31.983132   18563 system_pods.go:89] "kube-apiserver-addons-418182" [2dd34e4a-1b5e-4af9-8401-d75fc90d3e7a] Running
	I0817 21:12:31.983136   18563 system_pods.go:89] "kube-controller-manager-addons-418182" [531c6d37-3181-4ee3-8c89-3cc2b1f85a8c] Running
	I0817 21:12:31.983141   18563 system_pods.go:89] "kube-ingress-dns-minikube" [099b5fb8-6322-4e5e-80fa-db1837cacc68] Running
	I0817 21:12:31.983145   18563 system_pods.go:89] "kube-proxy-7kj5p" [d7aaea71-7374-49d5-9ba9-c8a8780c35d6] Running
	I0817 21:12:31.983149   18563 system_pods.go:89] "kube-scheduler-addons-418182" [40cba87a-05be-4daa-8945-b2815c5e5fe7] Running
	I0817 21:12:31.983154   18563 system_pods.go:89] "metrics-server-7746886d4f-hxxz9" [17f8de9b-b07e-447b-86c7-b4dbe0e78707] Running
	I0817 21:12:31.983159   18563 system_pods.go:89] "registry-jvrmk" [9aab1115-3b3c-44fc-a53c-2c86008dc60c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0817 21:12:31.983170   18563 system_pods.go:89] "registry-proxy-xx5bc" [ff6fee0b-74e6-4311-a81d-8123bc66f740] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0817 21:12:31.983175   18563 system_pods.go:89] "snapshot-controller-75bbb956b9-kf5vz" [4f37c3c2-6208-4957-bfb5-4775ac144285] Running
	I0817 21:12:31.983180   18563 system_pods.go:89] "snapshot-controller-75bbb956b9-lw2sl" [dc0a091d-167f-476e-be4e-b86b6195c7c6] Running
	I0817 21:12:31.983184   18563 system_pods.go:89] "storage-provisioner" [0a950d77-3a3b-46c8-aa9a-b460b49234b6] Running
	I0817 21:12:31.983188   18563 system_pods.go:89] "tiller-deploy-6847666dc-jqhh9" [973f9c57-236d-465a-af89-11b2247a28eb] Running
	I0817 21:12:31.983194   18563 system_pods.go:126] duration metric: took 8.771106ms to wait for k8s-apps to be running ...
	I0817 21:12:31.983200   18563 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 21:12:31.983233   18563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 21:12:31.995713   18563 system_svc.go:56] duration metric: took 12.505623ms (WaitForService) to wait for kubelet.
	I0817 21:12:31.995739   18563 kubeadm.go:581] duration metric: took 49.521701524s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
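	(The kubelet service check above is a single systemctl invocation run over SSH; its exact arguments appear in the ssh_runner line. A sketch of the same check run locally — passwordless sudo is an assumption:)

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// --quiet suppresses output; the exit code alone carries the answer.
		cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
		err := cmd.Run()
		fmt.Println("kubelet active:", err == nil)
	}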
	I0817 21:12:31.995765   18563 node_conditions.go:102] verifying NodePressure condition ...
	I0817 21:12:31.998371   18563 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0817 21:12:31.998390   18563 node_conditions.go:123] node cpu capacity is 8
	I0817 21:12:31.998400   18563 node_conditions.go:105] duration metric: took 2.630224ms to run NodePressure ...
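	(The NodePressure verification reads the capacity figures above straight off the Node object. A client-go sketch that fetches the same two values — the node name comes from the log; the kubeconfig path is an assumption:)

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed kubeconfig path
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "addons-418182", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := node.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("ephemeral storage: %s, cpu: %s\n", storage.String(), cpu.String())
	}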
	I0817 21:12:31.998409   18563 start.go:228] waiting for startup goroutines ...
	I0817 21:12:32.069784   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:32.069958   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:32.070340   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:32.234609   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:32.568625   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:32.569649   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:32.569724   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:32.733874   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:33.068429   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:33.069096   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:33.069210   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:33.235294   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:33.568820   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:33.569590   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:33.569780   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:33.734426   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:34.068326   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:34.069016   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:34.069108   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:34.234162   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:34.568735   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:34.569318   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:34.569640   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:34.734230   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:35.068616   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:35.069305   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:35.069347   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:35.233337   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:35.569516   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:35.569532   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:35.569568   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:35.735306   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:36.071192   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:36.071702   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:36.072540   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:36.235342   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:36.568759   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:36.569527   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:36.569849   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:36.735234   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:37.068612   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:37.069257   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:37.069630   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:37.234988   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:37.569322   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:37.569763   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:37.569867   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:37.734873   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:38.068528   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:38.069076   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:38.069189   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:38.234556   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:38.568523   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:38.568984   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:38.569077   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0817 21:12:38.734452   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:39.068835   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:39.069366   18563 kapi.go:107] duration metric: took 51.010825995s to wait for kubernetes.io/minikube-addons=registry ...
	I0817 21:12:39.069517   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:39.233822   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:39.568317   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:39.568931   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:39.734716   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:40.069011   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:40.069380   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:40.235034   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:40.568209   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0817 21:12:40.568612   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:40.735929   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:41.068957   18563 kapi.go:107] duration metric: took 51.008841059s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0817 21:12:41.069552   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:41.070971   18563 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-418182 cluster.
	I0817 21:12:41.072533   18563 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0817 21:12:41.074077   18563 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
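	(Per the gcp-auth note above, a single pod can opt out of credential mounting via the `gcp-auth-skip-secret` label. A client-go sketch that creates such a pod — the label key comes from the log; the pod name, namespace, and image are illustrative:)

	package main

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed kubeconfig path
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "no-gcp-creds", // illustrative name
				// Label key taken from the gcp-auth message above; presence of
				// the key is what the webhook checks.
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{Name: "app", Image: "nginx"}},
			},
		}
		if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}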
	I0817 21:12:41.234483   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:41.569886   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:41.733993   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:42.125630   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:42.235180   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:42.569978   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:42.734821   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:43.069206   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:43.235233   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:43.569217   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:43.733937   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:44.068859   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:44.234207   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:44.569677   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:44.734755   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:45.068712   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:45.233855   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:45.569143   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:45.734364   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:46.069491   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:46.246729   18563 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0817 21:12:46.568794   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:46.735339   18563 kapi.go:107] duration metric: took 57.572754504s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0817 21:12:47.129841   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:47.569203   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:48.126072   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:48.626898   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:49.069440   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:49.569995   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:50.069523   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:50.568444   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:51.070404   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:51.570063   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:52.069737   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:52.569040   18563 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0817 21:12:53.179370   18563 kapi.go:107] duration metric: took 1m5.124308064s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0817 21:12:53.188909   18563 out.go:177] * Enabled addons: ingress-dns, cloud-spanner, inspektor-gadget, helm-tiller, storage-provisioner, default-storageclass, metrics-server, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I0817 21:12:53.210227   18563 addons.go:502] enable addons completed in 1m10.849123479s: enabled=[ingress-dns cloud-spanner inspektor-gadget helm-tiller storage-provisioner default-storageclass metrics-server volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I0817 21:12:53.210273   18563 start.go:233] waiting for cluster config update ...
	I0817 21:12:53.210291   18563 start.go:242] writing updated cluster config ...
	I0817 21:12:53.210569   18563 ssh_runner.go:195] Run: rm -f paused
	I0817 21:12:53.268388   18563 start.go:600] kubectl: 1.28.0, cluster: 1.27.4 (minor skew: 1)
	I0817 21:12:53.348080   18563 out.go:177] * Done! kubectl is now configured to use "addons-418182" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Aug 17 21:15:29 addons-418182 crio[953]: time="2023-08-17 21:15:29.073263517Z" level=warning msg="Allowed annotations are specified for workload []"
	Aug 17 21:15:29 addons-418182 crio[953]: time="2023-08-17 21:15:29.151049603Z" level=info msg="Created container a1502105da6edaf05a7ab54362a8187fd5ac4d17c9ce185b956fad9bf0ecff6c: default/hello-world-app-65bdb79f98-vzstg/hello-world-app" id=ee442bab-0a25-4449-8b24-8e984813e55e name=/runtime.v1.RuntimeService/CreateContainer
	Aug 17 21:15:29 addons-418182 crio[953]: time="2023-08-17 21:15:29.151551211Z" level=info msg="Starting container: a1502105da6edaf05a7ab54362a8187fd5ac4d17c9ce185b956fad9bf0ecff6c" id=1150cdbc-771e-44ce-b9c6-1ad8a7d7ba80 name=/runtime.v1.RuntimeService/StartContainer
	Aug 17 21:15:29 addons-418182 crio[953]: time="2023-08-17 21:15:29.159394572Z" level=info msg="Started container" PID=9576 containerID=a1502105da6edaf05a7ab54362a8187fd5ac4d17c9ce185b956fad9bf0ecff6c description=default/hello-world-app-65bdb79f98-vzstg/hello-world-app id=1150cdbc-771e-44ce-b9c6-1ad8a7d7ba80 name=/runtime.v1.RuntimeService/StartContainer sandboxID=456ff2fcac891469527fc35d838969df8e526e5cb9f296390082c693f2ee2227
	Aug 17 21:15:29 addons-418182 crio[953]: time="2023-08-17 21:15:29.249734566Z" level=info msg="Removing container: e73a339dd5f797be5172c55cfcb43d093c4c36ac28db625afc81f37f262521fb" id=2c7cfe08-16c9-4373-81fc-3f0e783bccf6 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 17 21:15:29 addons-418182 crio[953]: time="2023-08-17 21:15:29.265889541Z" level=info msg="Removed container e73a339dd5f797be5172c55cfcb43d093c4c36ac28db625afc81f37f262521fb: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=2c7cfe08-16c9-4373-81fc-3f0e783bccf6 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 17 21:15:29 addons-418182 crio[953]: time="2023-08-17 21:15:29.378410840Z" level=info msg="Stopping pod sandbox: 6cd3d45cab10fff5520422d7b67acbcf6763f9c124e16123e5037c3ba6e40b08" id=f7763082-9e49-492b-89ac-7258b02c8c17 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 17 21:15:29 addons-418182 crio[953]: time="2023-08-17 21:15:29.378464830Z" level=info msg="Stopped pod sandbox (already stopped): 6cd3d45cab10fff5520422d7b67acbcf6763f9c124e16123e5037c3ba6e40b08" id=f7763082-9e49-492b-89ac-7258b02c8c17 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 17 21:15:29 addons-418182 crio[953]: time="2023-08-17 21:15:29.378738752Z" level=info msg="Removing pod sandbox: 6cd3d45cab10fff5520422d7b67acbcf6763f9c124e16123e5037c3ba6e40b08" id=eb2171fe-0011-4752-a1d7-bed90ff82292 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 17 21:15:29 addons-418182 crio[953]: time="2023-08-17 21:15:29.386584803Z" level=info msg="Removed pod sandbox: 6cd3d45cab10fff5520422d7b67acbcf6763f9c124e16123e5037c3ba6e40b08" id=eb2171fe-0011-4752-a1d7-bed90ff82292 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 17 21:15:29 addons-418182 crio[953]: time="2023-08-17 21:15:29.848799337Z" level=info msg="Stopping container: 4328f70cbc51fa9ba0b80eb9595cbc2359ef6b8cc71ed294588007751fd5827f (timeout: 1s)" id=3b2d069a-5856-4676-a17b-c94ff463600f name=/runtime.v1.RuntimeService/StopContainer
	Aug 17 21:15:30 addons-418182 crio[953]: time="2023-08-17 21:15:30.923545430Z" level=warning msg="Stopping container 4328f70cbc51fa9ba0b80eb9595cbc2359ef6b8cc71ed294588007751fd5827f with stop signal timed out: timeout reached after 1 second waiting for container process to exit" id=3b2d069a-5856-4676-a17b-c94ff463600f name=/runtime.v1.RuntimeService/StopContainer
	Aug 17 21:15:30 addons-418182 conmon[6118]: conmon 4328f70cbc51fa9ba0b8 <ninfo>: container 6130 exited with status 137
	Aug 17 21:15:31 addons-418182 crio[953]: time="2023-08-17 21:15:31.068230055Z" level=info msg="Stopped container 4328f70cbc51fa9ba0b80eb9595cbc2359ef6b8cc71ed294588007751fd5827f: ingress-nginx/ingress-nginx-controller-7799c6795f-tnrzt/controller" id=3b2d069a-5856-4676-a17b-c94ff463600f name=/runtime.v1.RuntimeService/StopContainer
	Aug 17 21:15:31 addons-418182 crio[953]: time="2023-08-17 21:15:31.068692995Z" level=info msg="Stopping pod sandbox: d3cf3ac49ff30287bda77d7d76026765377f7b07e04e192f680b41fc60494b8c" id=e042f72f-dfc1-42f7-8da1-cc2f335a1b88 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 17 21:15:31 addons-418182 crio[953]: time="2023-08-17 21:15:31.071667833Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-U52WHOM3MKZRL5UX - [0:0]\n:KUBE-HP-CD5BKRQNXNU2LVUZ - [0:0]\n-X KUBE-HP-U52WHOM3MKZRL5UX\n-X KUBE-HP-CD5BKRQNXNU2LVUZ\nCOMMIT\n"
	Aug 17 21:15:31 addons-418182 crio[953]: time="2023-08-17 21:15:31.072955007Z" level=info msg="Closing host port tcp:80"
	Aug 17 21:15:31 addons-418182 crio[953]: time="2023-08-17 21:15:31.072992115Z" level=info msg="Closing host port tcp:443"
	Aug 17 21:15:31 addons-418182 crio[953]: time="2023-08-17 21:15:31.074270686Z" level=info msg="Host port tcp:80 does not have an open socket"
	Aug 17 21:15:31 addons-418182 crio[953]: time="2023-08-17 21:15:31.074289216Z" level=info msg="Host port tcp:443 does not have an open socket"
	Aug 17 21:15:31 addons-418182 crio[953]: time="2023-08-17 21:15:31.074453663Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7799c6795f-tnrzt Namespace:ingress-nginx ID:d3cf3ac49ff30287bda77d7d76026765377f7b07e04e192f680b41fc60494b8c UID:5ceb8f79-b29b-455c-87d9-57d8c2c4b70c NetNS:/var/run/netns/54fd5029-29d0-4a97-8cec-0381f5db62d7 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 17 21:15:31 addons-418182 crio[953]: time="2023-08-17 21:15:31.074609993Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7799c6795f-tnrzt from CNI network \"kindnet\" (type=ptp)"
	Aug 17 21:15:31 addons-418182 crio[953]: time="2023-08-17 21:15:31.111128717Z" level=info msg="Stopped pod sandbox: d3cf3ac49ff30287bda77d7d76026765377f7b07e04e192f680b41fc60494b8c" id=e042f72f-dfc1-42f7-8da1-cc2f335a1b88 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 17 21:15:31 addons-418182 crio[953]: time="2023-08-17 21:15:31.256047603Z" level=info msg="Removing container: 4328f70cbc51fa9ba0b80eb9595cbc2359ef6b8cc71ed294588007751fd5827f" id=d7d0e6b9-8a55-43be-86ed-c28147196f46 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 17 21:15:31 addons-418182 crio[953]: time="2023-08-17 21:15:31.270523692Z" level=info msg="Removed container 4328f70cbc51fa9ba0b80eb9595cbc2359ef6b8cc71ed294588007751fd5827f: ingress-nginx/ingress-nginx-controller-7799c6795f-tnrzt/controller" id=d7d0e6b9-8a55-43be-86ed-c28147196f46 name=/runtime.v1.RuntimeService/RemoveContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a1502105da6ed       gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea                      8 seconds ago       Running             hello-world-app           0                   456ff2fcac891       hello-world-app-65bdb79f98-vzstg
	5a0ff0a98b4d4       docker.io/library/nginx@sha256:9d749f4c66e206398d0c2d667b2b14c201cc4fd089419245c14a031b9368de3a                              2 minutes ago       Running             nginx                     0                   8861c6140a1ae       nginx
	19188a3cbc683       ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552                        2 minutes ago       Running             headlamp                  0                   dcfc1feef92db       headlamp-5c78f74d8d-lng4q
	64b3772f5d667       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 2 minutes ago       Running             gcp-auth                  0                   bb5087b9af544       gcp-auth-58478865f7-6k44m
	2d4efa03ceac0       7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0                                                             3 minutes ago       Exited              patch                     2                   9d48433638a78       ingress-nginx-admission-patch-dx8g5
	e596ba063a9a0       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d   3 minutes ago       Exited              create                    0                   dcdcdaae25bfa       ingress-nginx-admission-create-chbrk
	ef32ec35cd1fe       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago       Running             storage-provisioner       0                   ed26ff4d8da96       storage-provisioner
	2ed826ef91938       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             3 minutes ago       Running             coredns                   0                   308a3bc0099c0       coredns-5d78c9869d-sv2j7
	dc703856fb95e       6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4                                                             3 minutes ago       Running             kube-proxy                0                   bd82067a4e6cc       kube-proxy-7kj5p
	4dc78819e051e       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da                                                             3 minutes ago       Running             kindnet-cni               0                   a4a5b7b06d4ae       kindnet-5jbf4
	f4edfc9fcecb9       98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16                                                             4 minutes ago       Running             kube-scheduler            0                   53c6ce18ffa29       kube-scheduler-addons-418182
	5763ae9bb43b8       e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c                                                             4 minutes ago       Running             kube-apiserver            0                   0774b39e6d7d7       kube-apiserver-addons-418182
	4f82f57d93614       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681                                                             4 minutes ago       Running             etcd                      0                   3890854fb7d1e       etcd-addons-418182
	f3aea087e4d64       f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5                                                             4 minutes ago       Running             kube-controller-manager   0                   631ca9c57d94b       kube-controller-manager-addons-418182
	
	* 
	* ==> coredns [2ed826ef91938d8ab5bf7f00c1d54076cd28839bb7fb601e5a2c7c5217bfce15] <==
	* [INFO] 10.244.0.16:46365 - 25403 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000099538s
	[INFO] 10.244.0.16:34792 - 26142 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.003985775s
	[INFO] 10.244.0.16:34792 - 13852 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.004438998s
	[INFO] 10.244.0.16:48214 - 48521 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004174292s
	[INFO] 10.244.0.16:48214 - 45962 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004536016s
	[INFO] 10.244.0.16:54210 - 62120 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004032818s
	[INFO] 10.244.0.16:54210 - 17847 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.00438071s
	[INFO] 10.244.0.16:37131 - 38186 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000079862s
	[INFO] 10.244.0.16:37131 - 41511 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000106004s
	[INFO] 10.244.0.17:54745 - 16875 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00018628s
	[INFO] 10.244.0.17:43760 - 48262 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000258339s
	[INFO] 10.244.0.17:47679 - 64547 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000124736s
	[INFO] 10.244.0.17:45229 - 14279 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000189655s
	[INFO] 10.244.0.17:47603 - 30209 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00009512s
	[INFO] 10.244.0.17:56951 - 65430 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000098589s
	[INFO] 10.244.0.17:55395 - 19772 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.004079259s
	[INFO] 10.244.0.17:33539 - 61165 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.004255622s
	[INFO] 10.244.0.17:59429 - 64380 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.004750274s
	[INFO] 10.244.0.17:46634 - 13464 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.005048143s
	[INFO] 10.244.0.17:45834 - 26662 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004770467s
	[INFO] 10.244.0.17:51693 - 51390 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00543827s
	[INFO] 10.244.0.17:46770 - 32421 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000700964s
	[INFO] 10.244.0.17:53015 - 786 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000717274s
	[INFO] 10.244.0.21:48908 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000127823s
	[INFO] 10.244.0.21:58519 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000158347s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-418182
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-418182
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=887b29127f76723e975982e9ba9e8c24f3dd2612
	                    minikube.k8s.io/name=addons-418182
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_17T21_11_30_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-418182
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 17 Aug 2023 21:11:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-418182
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 17 Aug 2023 21:15:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 17 Aug 2023 21:15:34 +0000   Thu, 17 Aug 2023 21:11:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 17 Aug 2023 21:15:34 +0000   Thu, 17 Aug 2023 21:11:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 17 Aug 2023 21:15:34 +0000   Thu, 17 Aug 2023 21:11:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 17 Aug 2023 21:15:34 +0000   Thu, 17 Aug 2023 21:12:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-418182
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	System Info:
	  Machine ID:                 743b1ec95e5e4825a87569a2a3f4d747
	  System UUID:                2a8b24b0-cb60-4209-9690-46ccdabd5772
	  Boot ID:                    8d1de0dd-e970-4922-97d1-4b473b3fd1c5
	  Kernel Version:             5.15.0-1039-gcp
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.27.4
	  Kube-Proxy Version:         v1.27.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-65bdb79f98-vzstg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  gcp-auth                    gcp-auth-58478865f7-6k44m                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	  headlamp                    headlamp-5c78f74d8d-lng4q                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m43s
	  kube-system                 coredns-5d78c9869d-sv2j7                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m56s
	  kube-system                 etcd-addons-418182                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m8s
	  kube-system                 kindnet-5jbf4                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m56s
	  kube-system                 kube-apiserver-addons-418182             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-controller-manager-addons-418182    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-proxy-7kj5p                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kube-scheduler-addons-418182             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m54s  kube-proxy       
	  Normal  Starting                 4m8s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m8s   kubelet          Node addons-418182 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m8s   kubelet          Node addons-418182 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m8s   kubelet          Node addons-418182 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m56s  node-controller  Node addons-418182 event: Registered Node addons-418182 in Controller
	  Normal  NodeReady                3m25s  kubelet          Node addons-418182 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.007502] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003012] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000690] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000628] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000608] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000646] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000625] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000631] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000607] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000665] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +9.390305] kauditd_printk_skb: 36 callbacks suppressed
	[Aug17 21:13] IPv4: martian source 10.244.0.18 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: be 8e 7c 04 40 95 fa e1 87 57 a0 89 08 00
	[  +1.016267] IPv4: martian source 10.244.0.18 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: be 8e 7c 04 40 95 fa e1 87 57 a0 89 08 00
	[  +2.015804] IPv4: martian source 10.244.0.18 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: be 8e 7c 04 40 95 fa e1 87 57 a0 89 08 00
	[  +4.031574] IPv4: martian source 10.244.0.18 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: be 8e 7c 04 40 95 fa e1 87 57 a0 89 08 00
	[  +8.195212] IPv4: martian source 10.244.0.18 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: be 8e 7c 04 40 95 fa e1 87 57 a0 89 08 00
	[ +16.122440] IPv4: martian source 10.244.0.18 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: be 8e 7c 04 40 95 fa e1 87 57 a0 89 08 00
	[Aug17 21:14] IPv4: martian source 10.244.0.18 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: be 8e 7c 04 40 95 fa e1 87 57 a0 89 08 00
	
	* 
	* ==> etcd [4f82f57d936142a543faa613bd12385012d8bc66a6d934986ebfcd192cbe6fd8] <==
	* {"level":"info","ts":"2023-08-17T21:11:24.539Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-08-17T21:11:45.028Z","caller":"traceutil/trace.go:171","msg":"trace[927993870] transaction","detail":"{read_only:false; response_revision:384; number_of_response:1; }","duration":"100.375505ms","start":"2023-08-17T21:11:44.928Z","end":"2023-08-17T21:11:45.028Z","steps":["trace[927993870] 'compare'  (duration: 11.17797ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-17T21:11:46.022Z","caller":"traceutil/trace.go:171","msg":"trace[355117939] transaction","detail":"{read_only:false; response_revision:410; number_of_response:1; }","duration":"177.749448ms","start":"2023-08-17T21:11:45.844Z","end":"2023-08-17T21:11:46.022Z","steps":["trace[355117939] 'process raft request'  (duration: 177.652699ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-17T21:11:47.029Z","caller":"traceutil/trace.go:171","msg":"trace[1532106147] transaction","detail":"{read_only:false; response_revision:455; number_of_response:1; }","duration":"107.084746ms","start":"2023-08-17T21:11:46.922Z","end":"2023-08-17T21:11:47.029Z","steps":["trace[1532106147] 'process raft request'  (duration: 13.287946ms)","trace[1532106147] 'compare'  (duration: 93.626497ms)"],"step_count":2}
	{"level":"info","ts":"2023-08-17T21:11:47.029Z","caller":"traceutil/trace.go:171","msg":"trace[1624098466] transaction","detail":"{read_only:false; response_revision:457; number_of_response:1; }","duration":"103.954417ms","start":"2023-08-17T21:11:46.925Z","end":"2023-08-17T21:11:47.029Z","steps":["trace[1624098466] 'process raft request'  (duration: 103.776736ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-17T21:11:47.029Z","caller":"traceutil/trace.go:171","msg":"trace[1467606566] transaction","detail":"{read_only:false; response_revision:456; number_of_response:1; }","duration":"107.124169ms","start":"2023-08-17T21:11:46.922Z","end":"2023-08-17T21:11:47.029Z","steps":["trace[1467606566] 'process raft request'  (duration: 106.964155ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-17T21:11:47.030Z","caller":"traceutil/trace.go:171","msg":"trace[388328488] transaction","detail":"{read_only:false; response_revision:458; number_of_response:1; }","duration":"104.083719ms","start":"2023-08-17T21:11:46.926Z","end":"2023-08-17T21:11:47.030Z","steps":["trace[388328488] 'process raft request'  (duration: 103.5605ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-17T21:11:47.030Z","caller":"traceutil/trace.go:171","msg":"trace[1210254084] transaction","detail":"{read_only:false; response_revision:459; number_of_response:1; }","duration":"100.534612ms","start":"2023-08-17T21:11:46.930Z","end":"2023-08-17T21:11:47.030Z","steps":["trace[1210254084] 'process raft request'  (duration: 99.888293ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-17T21:11:47.030Z","caller":"traceutil/trace.go:171","msg":"trace[1436000630] linearizableReadLoop","detail":"{readStateIndex:472; appliedIndex:467; }","duration":"100.370171ms","start":"2023-08-17T21:11:46.930Z","end":"2023-08-17T21:11:47.030Z","steps":["trace[1436000630] 'read index received'  (duration: 5.575943ms)","trace[1436000630] 'applied index is now lower than readState.Index'  (duration: 94.793057ms)"],"step_count":2}
	{"level":"info","ts":"2023-08-17T21:11:47.030Z","caller":"traceutil/trace.go:171","msg":"trace[66356265] transaction","detail":"{read_only:false; response_revision:460; number_of_response:1; }","duration":"100.411462ms","start":"2023-08-17T21:11:46.930Z","end":"2023-08-17T21:11:47.030Z","steps":["trace[66356265] 'process raft request'  (duration: 99.868726ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-17T21:11:47.031Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.649022ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllers/kube-system/registry\" ","response":"range_response_count:1 size:2772"}
	{"level":"info","ts":"2023-08-17T21:11:47.031Z","caller":"traceutil/trace.go:171","msg":"trace[1771112888] range","detail":"{range_begin:/registry/controllers/kube-system/registry; range_end:; response_count:1; response_revision:464; }","duration":"100.768498ms","start":"2023-08-17T21:11:46.930Z","end":"2023-08-17T21:11:47.031Z","steps":["trace[1771112888] 'agreement among raft nodes before linearized reading'  (duration: 100.588439ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-17T21:11:47.031Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.387165ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/kube-system/tiller-deploy\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-08-17T21:11:47.031Z","caller":"traceutil/trace.go:171","msg":"trace[243789372] range","detail":"{range_begin:/registry/services/specs/kube-system/tiller-deploy; range_end:; response_count:0; response_revision:464; }","duration":"101.49411ms","start":"2023-08-17T21:11:46.930Z","end":"2023-08-17T21:11:47.031Z","steps":["trace[243789372] 'agreement among raft nodes before linearized reading'  (duration: 100.40514ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-17T21:12:31.828Z","caller":"traceutil/trace.go:171","msg":"trace[160862007] transaction","detail":"{read_only:false; response_revision:950; number_of_response:1; }","duration":"165.986327ms","start":"2023-08-17T21:12:31.662Z","end":"2023-08-17T21:12:31.828Z","steps":["trace[160862007] 'process raft request'  (duration: 126.73549ms)","trace[160862007] 'compare'  (duration: 39.078557ms)"],"step_count":2}
	{"level":"info","ts":"2023-08-17T21:12:31.828Z","caller":"traceutil/trace.go:171","msg":"trace[2030302760] transaction","detail":"{read_only:false; response_revision:951; number_of_response:1; }","duration":"166.037787ms","start":"2023-08-17T21:12:31.662Z","end":"2023-08-17T21:12:31.828Z","steps":["trace[2030302760] 'process raft request'  (duration: 165.830844ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-17T21:12:31.828Z","caller":"traceutil/trace.go:171","msg":"trace[1939090304] transaction","detail":"{read_only:false; response_revision:952; number_of_response:1; }","duration":"138.147791ms","start":"2023-08-17T21:12:31.690Z","end":"2023-08-17T21:12:31.828Z","steps":["trace[1939090304] 'process raft request'  (duration: 137.930751ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-17T21:12:31.829Z","caller":"traceutil/trace.go:171","msg":"trace[1684436374] transaction","detail":"{read_only:false; number_of_response:1; response_revision:952; }","duration":"137.288195ms","start":"2023-08-17T21:12:31.691Z","end":"2023-08-17T21:12:31.829Z","steps":["trace[1684436374] 'process raft request'  (duration: 136.977462ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-17T21:12:53.172Z","caller":"traceutil/trace.go:171","msg":"trace[731500130] linearizableReadLoop","detail":"{readStateIndex:1111; appliedIndex:1110; }","duration":"105.92228ms","start":"2023-08-17T21:12:53.067Z","end":"2023-08-17T21:12:53.172Z","steps":["trace[731500130] 'read index received'  (duration: 22.104022ms)","trace[731500130] 'applied index is now lower than readState.Index'  (duration: 83.817616ms)"],"step_count":2}
	{"level":"info","ts":"2023-08-17T21:12:53.173Z","caller":"traceutil/trace.go:171","msg":"trace[610620133] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1077; }","duration":"137.672703ms","start":"2023-08-17T21:12:53.035Z","end":"2023-08-17T21:12:53.173Z","steps":["trace[610620133] 'process raft request'  (duration: 53.82015ms)","trace[610620133] 'compare'  (duration: 83.664259ms)"],"step_count":2}
	{"level":"warn","ts":"2023-08-17T21:12:53.173Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.117062ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14859"}
	{"level":"info","ts":"2023-08-17T21:12:53.173Z","caller":"traceutil/trace.go:171","msg":"trace[1704739747] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1077; }","duration":"106.172169ms","start":"2023-08-17T21:12:53.067Z","end":"2023-08-17T21:12:53.173Z","steps":["trace[1704739747] 'agreement among raft nodes before linearized reading'  (duration: 106.007713ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-17T21:12:59.090Z","caller":"traceutil/trace.go:171","msg":"trace[406613559] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1119; }","duration":"106.1472ms","start":"2023-08-17T21:12:58.984Z","end":"2023-08-17T21:12:59.090Z","steps":["trace[406613559] 'process raft request'  (duration: 63.580077ms)","trace[406613559] 'compare'  (duration: 42.425272ms)"],"step_count":2}
	{"level":"info","ts":"2023-08-17T21:13:15.271Z","caller":"traceutil/trace.go:171","msg":"trace[257549832] transaction","detail":"{read_only:false; response_revision:1301; number_of_response:1; }","duration":"129.113075ms","start":"2023-08-17T21:13:15.142Z","end":"2023-08-17T21:13:15.271Z","steps":["trace[257549832] 'process raft request'  (duration: 61.877141ms)","trace[257549832] 'compare'  (duration: 67.112308ms)"],"step_count":2}
	{"level":"info","ts":"2023-08-17T21:13:31.615Z","caller":"traceutil/trace.go:171","msg":"trace[1370069280] transaction","detail":"{read_only:false; response_revision:1367; number_of_response:1; }","duration":"142.263986ms","start":"2023-08-17T21:13:31.473Z","end":"2023-08-17T21:13:31.615Z","steps":["trace[1370069280] 'process raft request'  (duration: 83.933589ms)","trace[1370069280] 'compare'  (duration: 58.167016ms)"],"step_count":2}
	
	* 
	* ==> gcp-auth [64b3772f5d667350c72d4e1f24bc9d1255d8422c69e7cc2df18add3e76539509] <==
	* 2023/08/17 21:12:40 GCP Auth Webhook started!
	2023/08/17 21:12:54 Ready to marshal response ...
	2023/08/17 21:12:54 Ready to write response ...
	2023/08/17 21:12:54 Ready to marshal response ...
	2023/08/17 21:12:54 Ready to write response ...
	2023/08/17 21:12:54 Ready to marshal response ...
	2023/08/17 21:12:54 Ready to write response ...
	2023/08/17 21:12:58 Ready to marshal response ...
	2023/08/17 21:12:58 Ready to write response ...
	2023/08/17 21:13:03 Ready to marshal response ...
	2023/08/17 21:13:03 Ready to write response ...
	2023/08/17 21:13:07 Ready to marshal response ...
	2023/08/17 21:13:07 Ready to write response ...
	2023/08/17 21:13:09 Ready to marshal response ...
	2023/08/17 21:13:09 Ready to write response ...
	2023/08/17 21:13:35 Ready to marshal response ...
	2023/08/17 21:13:35 Ready to write response ...
	2023/08/17 21:15:27 Ready to marshal response ...
	2023/08/17 21:15:27 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  21:15:38 up 58 min,  0 users,  load average: 0.16, 0.51, 0.27
	Linux addons-418182 5.15.0-1039-gcp #47~20.04.1-Ubuntu SMP Thu Jul 27 22:40:03 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [4dc78819e051e216a0d0376dc02699ced7cafd90dc64a335aa234ae35568b145] <==
	* I0817 21:13:32.845116       1 main.go:227] handling current node
	I0817 21:13:42.856911       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:13:42.856934       1 main.go:227] handling current node
	I0817 21:13:52.870253       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:13:52.870277       1 main.go:227] handling current node
	I0817 21:14:02.873699       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:14:02.873730       1 main.go:227] handling current node
	I0817 21:14:12.877193       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:14:12.877215       1 main.go:227] handling current node
	I0817 21:14:22.889225       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:14:22.889249       1 main.go:227] handling current node
	I0817 21:14:32.893030       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:14:32.893054       1 main.go:227] handling current node
	I0817 21:14:42.900924       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:14:42.900951       1 main.go:227] handling current node
	I0817 21:14:52.903872       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:14:52.903895       1 main.go:227] handling current node
	I0817 21:15:02.915988       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:15:02.916010       1 main.go:227] handling current node
	I0817 21:15:12.919919       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:15:12.919943       1 main.go:227] handling current node
	I0817 21:15:22.923054       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:15:22.923079       1 main.go:227] handling current node
	I0817 21:15:32.926236       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:15:32.926258       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [5763ae9bb43b84bfe9d5f6fad8e76a7104ea74ecb6e21b044b8323dcb609c39f] <==
	* I0817 21:13:51.624215       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0817 21:13:51.624274       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0817 21:13:51.630294       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0817 21:13:51.630345       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0817 21:13:51.634272       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0817 21:13:51.634378       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0817 21:13:51.641073       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0817 21:13:51.641313       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0817 21:13:51.654575       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0817 21:13:51.654722       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0817 21:13:51.733047       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0817 21:13:51.733169       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0817 21:13:51.741474       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0817 21:13:51.741516       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0817 21:13:51.743086       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0817 21:13:51.743189       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0817 21:13:52.642200       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0817 21:13:52.741730       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0817 21:13:52.753412       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0817 21:14:32.872938       1 handler_proxy.go:144] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0817 21:14:32.872962       1 handler_proxy.go:100] no RequestInfo found in the context
	E0817 21:14:32.873002       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0817 21:14:32.873015       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0817 21:15:27.763040       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs=map[IPv4:10.97.174.22]
	
	* 
	* ==> kube-controller-manager [f3aea087e4d640c849a5d6d8f68f3c0018424cbc98b96e9356f94d2adda28f61] <==
	* E0817 21:14:11.085372       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0817 21:14:12.240091       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0817 21:14:12.240123       1 shared_informer.go:318] Caches are synced for resource quota
	I0817 21:14:12.570610       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0817 21:14:12.570648       1 shared_informer.go:318] Caches are synced for garbage collector
	W0817 21:14:24.663569       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0817 21:14:24.663599       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0817 21:14:29.255122       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0817 21:14:29.255149       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0817 21:14:29.338356       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0817 21:14:29.338385       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0817 21:14:31.827344       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0817 21:14:31.827379       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0817 21:14:57.947360       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0817 21:14:57.947392       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0817 21:15:11.458846       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0817 21:15:11.458876       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0817 21:15:14.310514       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0817 21:15:14.310545       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0817 21:15:22.899547       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0817 21:15:22.899576       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0817 21:15:27.609502       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-65bdb79f98 to 1"
	I0817 21:15:27.621708       1 event.go:307] "Event occurred" object="default/hello-world-app-65bdb79f98" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-65bdb79f98-vzstg"
	I0817 21:15:29.838785       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-create
	I0817 21:15:29.842851       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-patch
	
	* 
	* ==> kube-proxy [dc703856fb95e2a26d5cd4c9fed94310986a9d3026e30b4a880beb2e9a74a6ca] <==
	* I0817 21:11:42.456755       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0817 21:11:42.461845       1 server_others.go:110] "Detected node IP" address="192.168.49.2"
	I0817 21:11:42.461923       1 server_others.go:554] "Using iptables proxy"
	I0817 21:11:42.942114       1 server_others.go:192] "Using iptables Proxier"
	I0817 21:11:42.942246       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0817 21:11:42.942320       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0817 21:11:42.942361       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0817 21:11:42.942480       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0817 21:11:42.943137       1 server.go:658] "Version info" version="v1.27.4"
	I0817 21:11:42.943464       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0817 21:11:43.044420       1 config.go:188] "Starting service config controller"
	I0817 21:11:43.044453       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0817 21:11:43.044486       1 config.go:97] "Starting endpoint slice config controller"
	I0817 21:11:43.044491       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0817 21:11:43.045046       1 config.go:315] "Starting node config controller"
	I0817 21:11:43.045056       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0817 21:11:43.235792       1 shared_informer.go:318] Caches are synced for node config
	I0817 21:11:43.236067       1 shared_informer.go:318] Caches are synced for service config
	I0817 21:11:43.236139       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [f4edfc9fcecb9dbf27326008e2fdac0d268605578ed632db5165bd50c6b96c55] <==
	* W0817 21:11:26.436005       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0817 21:11:26.436024       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0817 21:11:26.436075       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0817 21:11:26.436100       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0817 21:11:26.436154       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 21:11:26.436172       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0817 21:11:26.436214       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0817 21:11:26.436231       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0817 21:11:26.436214       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0817 21:11:26.436267       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0817 21:11:26.436334       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0817 21:11:26.436359       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0817 21:11:26.436955       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 21:11:26.437075       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0817 21:11:26.437051       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0817 21:11:26.437141       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0817 21:11:27.266541       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0817 21:11:27.266576       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0817 21:11:27.283011       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0817 21:11:27.283040       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0817 21:11:27.398707       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 21:11:27.398742       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0817 21:11:27.465658       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0817 21:11:27.465689       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0817 21:11:29.527452       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Aug 17 21:15:29 addons-418182 kubelet[1566]: E0817 21:15:29.299835    1566 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/bae89384099a986091a3a11a313981766a3dd677c1ff74543fc57a35030b8712/diff" to get inode usage: stat /var/lib/containers/storage/overlay/bae89384099a986091a3a11a313981766a3dd677c1ff74543fc57a35030b8712/diff: no such file or directory, extraDiskErr: <nil>
	Aug 17 21:15:29 addons-418182 kubelet[1566]: E0817 21:15:29.300990    1566 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/e766f9a1ccf75ffd28aec3ad9068c7b76268e76d0aa59537c047d2f821a994fb/diff" to get inode usage: stat /var/lib/containers/storage/overlay/e766f9a1ccf75ffd28aec3ad9068c7b76268e76d0aa59537c047d2f821a994fb/diff: no such file or directory, extraDiskErr: <nil>
	Aug 17 21:15:29 addons-418182 kubelet[1566]: E0817 21:15:29.308894    1566 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/5d817322481321548f32a17980e3f0a7b11752043891b82501fa76e414a8097a/diff" to get inode usage: stat /var/lib/containers/storage/overlay/5d817322481321548f32a17980e3f0a7b11752043891b82501fa76e414a8097a/diff: no such file or directory, extraDiskErr: <nil>
	Aug 17 21:15:29 addons-418182 kubelet[1566]: E0817 21:15:29.323900    1566 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/a9c2809be7d0f97410f140b270031966e855abc7aa6706bb62d905368e6cd087/diff" to get inode usage: stat /var/lib/containers/storage/overlay/a9c2809be7d0f97410f140b270031966e855abc7aa6706bb62d905368e6cd087/diff: no such file or directory, extraDiskErr: <nil>
	Aug 17 21:15:29 addons-418182 kubelet[1566]: E0817 21:15:29.326061    1566 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/0fb519fe687dbccda526cf0bcc40bfbdfb8409906e406fb37c1ded08fe2ea6d5/diff" to get inode usage: stat /var/lib/containers/storage/overlay/0fb519fe687dbccda526cf0bcc40bfbdfb8409906e406fb37c1ded08fe2ea6d5/diff: no such file or directory, extraDiskErr: <nil>
	Aug 17 21:15:29 addons-418182 kubelet[1566]: E0817 21:15:29.328066    1566 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/a9c2809be7d0f97410f140b270031966e855abc7aa6706bb62d905368e6cd087/diff" to get inode usage: stat /var/lib/containers/storage/overlay/a9c2809be7d0f97410f140b270031966e855abc7aa6706bb62d905368e6cd087/diff: no such file or directory, extraDiskErr: <nil>
	Aug 17 21:15:29 addons-418182 kubelet[1566]: E0817 21:15:29.333464    1566 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/390fb5700f2451c0361c84638cffe58c8450fc06ac89160628d7bdac42ed9746/diff" to get inode usage: stat /var/lib/containers/storage/overlay/390fb5700f2451c0361c84638cffe58c8450fc06ac89160628d7bdac42ed9746/diff: no such file or directory, extraDiskErr: <nil>
	Aug 17 21:15:29 addons-418182 kubelet[1566]: E0817 21:15:29.338773    1566 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/d31ea691f6353b9b22487b9c6ab09c4b76c8069a6f08db11fa30408f70a02aca/diff" to get inode usage: stat /var/lib/containers/storage/overlay/d31ea691f6353b9b22487b9c6ab09c4b76c8069a6f08db11fa30408f70a02aca/diff: no such file or directory, extraDiskErr: <nil>
	Aug 17 21:15:29 addons-418182 kubelet[1566]: E0817 21:15:29.349365    1566 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/e766f9a1ccf75ffd28aec3ad9068c7b76268e76d0aa59537c047d2f821a994fb/diff" to get inode usage: stat /var/lib/containers/storage/overlay/e766f9a1ccf75ffd28aec3ad9068c7b76268e76d0aa59537c047d2f821a994fb/diff: no such file or directory, extraDiskErr: <nil>
	Aug 17 21:15:29 addons-418182 kubelet[1566]: E0817 21:15:29.351522    1566 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/0fb519fe687dbccda526cf0bcc40bfbdfb8409906e406fb37c1ded08fe2ea6d5/diff" to get inode usage: stat /var/lib/containers/storage/overlay/0fb519fe687dbccda526cf0bcc40bfbdfb8409906e406fb37c1ded08fe2ea6d5/diff: no such file or directory, extraDiskErr: <nil>
	Aug 17 21:15:29 addons-418182 kubelet[1566]: E0817 21:15:29.850366    1566 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7799c6795f-tnrzt.177c482fbc8ba738", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7799c6795f-tnrzt", UID:"5ceb8f79-b29b-455c-87d9-57d8c2c4b70c", APIVersion:"v1", ResourceVersion:"738", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"addons-418182"}, FirstTimestamp:time.Date(2023, time.August, 17, 21, 15, 29, 848395576, time.Local), LastTimestamp:time.Date(2023, time.August, 17, 21, 15, 29, 848395576, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7799c6795f-tnrzt.177c482fbc8ba738" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 17 21:15:31 addons-418182 kubelet[1566]: I0817 21:15:31.224642    1566 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=067cb7c0-5197-4bde-9067-69ce68dbb39a path="/var/lib/kubelet/pods/067cb7c0-5197-4bde-9067-69ce68dbb39a/volumes"
	Aug 17 21:15:31 addons-418182 kubelet[1566]: I0817 21:15:31.224953    1566 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=099b5fb8-6322-4e5e-80fa-db1837cacc68 path="/var/lib/kubelet/pods/099b5fb8-6322-4e5e-80fa-db1837cacc68/volumes"
	Aug 17 21:15:31 addons-418182 kubelet[1566]: I0817 21:15:31.225239    1566 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=4344082d-cca8-4199-adc8-8234bc594480 path="/var/lib/kubelet/pods/4344082d-cca8-4199-adc8-8234bc594480/volumes"
	Aug 17 21:15:31 addons-418182 kubelet[1566]: I0817 21:15:31.242594    1566 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drct6\" (UniqueName: \"kubernetes.io/projected/5ceb8f79-b29b-455c-87d9-57d8c2c4b70c-kube-api-access-drct6\") pod \"5ceb8f79-b29b-455c-87d9-57d8c2c4b70c\" (UID: \"5ceb8f79-b29b-455c-87d9-57d8c2c4b70c\") "
	Aug 17 21:15:31 addons-418182 kubelet[1566]: I0817 21:15:31.242644    1566 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5ceb8f79-b29b-455c-87d9-57d8c2c4b70c-webhook-cert\") pod \"5ceb8f79-b29b-455c-87d9-57d8c2c4b70c\" (UID: \"5ceb8f79-b29b-455c-87d9-57d8c2c4b70c\") "
	Aug 17 21:15:31 addons-418182 kubelet[1566]: I0817 21:15:31.244382    1566 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ceb8f79-b29b-455c-87d9-57d8c2c4b70c-kube-api-access-drct6" (OuterVolumeSpecName: "kube-api-access-drct6") pod "5ceb8f79-b29b-455c-87d9-57d8c2c4b70c" (UID: "5ceb8f79-b29b-455c-87d9-57d8c2c4b70c"). InnerVolumeSpecName "kube-api-access-drct6". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 17 21:15:31 addons-418182 kubelet[1566]: I0817 21:15:31.244480    1566 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ceb8f79-b29b-455c-87d9-57d8c2c4b70c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "5ceb8f79-b29b-455c-87d9-57d8c2c4b70c" (UID: "5ceb8f79-b29b-455c-87d9-57d8c2c4b70c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 17 21:15:31 addons-418182 kubelet[1566]: I0817 21:15:31.255139    1566 scope.go:115] "RemoveContainer" containerID="4328f70cbc51fa9ba0b80eb9595cbc2359ef6b8cc71ed294588007751fd5827f"
	Aug 17 21:15:31 addons-418182 kubelet[1566]: I0817 21:15:31.270742    1566 scope.go:115] "RemoveContainer" containerID="4328f70cbc51fa9ba0b80eb9595cbc2359ef6b8cc71ed294588007751fd5827f"
	Aug 17 21:15:31 addons-418182 kubelet[1566]: E0817 21:15:31.271111    1566 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4328f70cbc51fa9ba0b80eb9595cbc2359ef6b8cc71ed294588007751fd5827f\": container with ID starting with 4328f70cbc51fa9ba0b80eb9595cbc2359ef6b8cc71ed294588007751fd5827f not found: ID does not exist" containerID="4328f70cbc51fa9ba0b80eb9595cbc2359ef6b8cc71ed294588007751fd5827f"
	Aug 17 21:15:31 addons-418182 kubelet[1566]: I0817 21:15:31.271146    1566 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:4328f70cbc51fa9ba0b80eb9595cbc2359ef6b8cc71ed294588007751fd5827f} err="failed to get container status \"4328f70cbc51fa9ba0b80eb9595cbc2359ef6b8cc71ed294588007751fd5827f\": rpc error: code = NotFound desc = could not find container \"4328f70cbc51fa9ba0b80eb9595cbc2359ef6b8cc71ed294588007751fd5827f\": container with ID starting with 4328f70cbc51fa9ba0b80eb9595cbc2359ef6b8cc71ed294588007751fd5827f not found: ID does not exist"
	Aug 17 21:15:31 addons-418182 kubelet[1566]: I0817 21:15:31.343421    1566 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5ceb8f79-b29b-455c-87d9-57d8c2c4b70c-webhook-cert\") on node \"addons-418182\" DevicePath \"\""
	Aug 17 21:15:31 addons-418182 kubelet[1566]: I0817 21:15:31.343458    1566 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-drct6\" (UniqueName: \"kubernetes.io/projected/5ceb8f79-b29b-455c-87d9-57d8c2c4b70c-kube-api-access-drct6\") on node \"addons-418182\" DevicePath \"\""
	Aug 17 21:15:33 addons-418182 kubelet[1566]: I0817 21:15:33.224607    1566 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=5ceb8f79-b29b-455c-87d9-57d8c2c4b70c path="/var/lib/kubelet/pods/5ceb8f79-b29b-455c-87d9-57d8c2c4b70c/volumes"
	
	* 
	* ==> storage-provisioner [ef32ec35cd1fe6c55f5657cf3e2624de3e9c60b981c6f3fb2f00b7195056b737] <==
	* I0817 21:12:13.940279       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0817 21:12:13.950249       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0817 21:12:13.950304       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0817 21:12:13.957862       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0817 21:12:13.958083       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"743cbe29-6a19-4a80-8afd-7e644d4dce53", APIVersion:"v1", ResourceVersion:"820", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-418182_fd54fb8b-ce5c-461c-9bdd-a110b793795e became leader
	I0817 21:12:13.958094       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-418182_fd54fb8b-ce5c-461c-9bdd-a110b793795e!
	I0817 21:12:14.058926       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-418182_fd54fb8b-ce5c-461c-9bdd-a110b793795e!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-418182 -n addons-418182
helpers_test.go:261: (dbg) Run:  kubectl --context addons-418182 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (152.11s)
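
Note: exit status 28 from the remote command is curl's operation-timeout code, so the request to http://127.0.0.1/ inside the node never got an answer even though the nginx pod was Running; the gap sits between the node's port 80 and the ingress-nginx controller rather than in the workload. A minimal way to re-check while the profile is still up (a debugging sketch, not what the test runs; the -v/--max-time flags and the log query are additions):

	out/minikube-linux-amd64 -p addons-418182 ssh "curl -sv --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	kubectl --context addons-418182 -n ingress-nginx get pods,svc -o wide
	kubectl --context addons-418182 -n ingress-nginx logs -l app.kubernetes.io/component=controller --tail=50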

TestFunctional/parallel/ImageCommands/ImageRemove (2.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 image rm gcr.io/google-containers/addon-resizer:functional-702251 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-linux-amd64 -p functional-702251 image rm gcr.io/google-containers/addon-resizer:functional-702251 --alsologtostderr: (2.216997178s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 image ls
functional_test.go:402: expected "gcr.io/google-containers/addon-resizer:functional-702251" to be removed from minikube but still exists
--- FAIL: TestFunctional/parallel/ImageCommands/ImageRemove (2.48s)
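
Note: the rm step itself exited 0 (in ~2.2s), so the failure is the follow-up image ls still listing the tag, i.e. the deletion did not land in cri-o's image store. A cross-check directly against the runtime (standard minikube and crictl invocations, not part of the test):

	out/minikube-linux-amd64 -p functional-702251 image ls
	out/minikube-linux-amd64 -p functional-702251 ssh "sudo crictl images" | grep addon-resizer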

TestIngressAddonLegacy/serial/ValidateIngressAddons (183.74s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-997484 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-997484 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (16.947385474s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-997484 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-997484 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [98713ea2-b0c3-44a7-952f-d2e59f1da46c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [98713ea2-b0c3-44a7-952f-d2e59f1da46c] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.006535817s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-997484 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0817 21:22:53.379691   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/client.crt: no such file or directory
E0817 21:23:21.063965   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/client.crt: no such file or directory
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-997484 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.073593355s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-997484 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-997484 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
E0817 21:24:10.367193   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/functional-702251/client.crt: no such file or directory
E0817 21:24:10.372467   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/functional-702251/client.crt: no such file or directory
E0817 21:24:10.382731   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/functional-702251/client.crt: no such file or directory
E0817 21:24:10.402961   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/functional-702251/client.crt: no such file or directory
E0817 21:24:10.443216   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/functional-702251/client.crt: no such file or directory
E0817 21:24:10.523501   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/functional-702251/client.crt: no such file or directory
E0817 21:24:10.683925   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/functional-702251/client.crt: no such file or directory
E0817 21:24:11.004658   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/functional-702251/client.crt: no such file or directory
E0817 21:24:11.646223   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/functional-702251/client.crt: no such file or directory
E0817 21:24:12.927110   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/functional-702251/client.crt: no such file or directory
E0817 21:24:15.488911   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/functional-702251/client.crt: no such file or directory
E0817 21:24:20.609395   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/functional-702251/client.crt: no such file or directory
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.017962648s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
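
Note: "connection timed out; no servers could be reached" means no DNS reply of any kind came back from 192.168.49.2:53 (not an NXDOMAIN), so the ingress-dns responder was unreachable from the host. A faster retry than nslookup's stacked timeouts, plus a check that the addon's pod is up (dig flags are standard; kube-system is where the addon normally deploys, adjust if needed):

	dig +time=2 +tries=1 @192.168.49.2 hello-john.test
	kubectl --context ingress-addon-legacy-997484 -n kube-system get pods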
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-997484 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-997484 addons disable ingress-dns --alsologtostderr -v=1: (1.777027826s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-997484 addons disable ingress --alsologtostderr -v=1
E0817 21:24:30.849937   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/functional-702251/client.crt: no such file or directory
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-997484 addons disable ingress --alsologtostderr -v=1: (7.396329746s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-997484
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-997484:

-- stdout --
	[
	    {
	        "Id": "de7d7df359b2008f96efcbc2c960754a3ef93880687ff7cfbec215e5bc0b0264",
	        "Created": "2023-08-17T21:20:14.463895394Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 57130,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-08-17T21:20:14.723654642Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6cc01e6091959400f260dc442708e7c71630b58dab1f7c344cb00926bd84950",
	        "ResolvConfPath": "/var/lib/docker/containers/de7d7df359b2008f96efcbc2c960754a3ef93880687ff7cfbec215e5bc0b0264/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/de7d7df359b2008f96efcbc2c960754a3ef93880687ff7cfbec215e5bc0b0264/hostname",
	        "HostsPath": "/var/lib/docker/containers/de7d7df359b2008f96efcbc2c960754a3ef93880687ff7cfbec215e5bc0b0264/hosts",
	        "LogPath": "/var/lib/docker/containers/de7d7df359b2008f96efcbc2c960754a3ef93880687ff7cfbec215e5bc0b0264/de7d7df359b2008f96efcbc2c960754a3ef93880687ff7cfbec215e5bc0b0264-json.log",
	        "Name": "/ingress-addon-legacy-997484",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-997484:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ingress-addon-legacy-997484",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ac2672e91db3e1dcad6dea317a1d4ec7d6a6ff2abb9117b51545d69aec060e23-init/diff:/var/lib/docker/overlay2/4fa4181e3bc5ec3351265343644d26aad7e77680fc05db63fc4bb2710b90d29d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ac2672e91db3e1dcad6dea317a1d4ec7d6a6ff2abb9117b51545d69aec060e23/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ac2672e91db3e1dcad6dea317a1d4ec7d6a6ff2abb9117b51545d69aec060e23/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ac2672e91db3e1dcad6dea317a1d4ec7d6a6ff2abb9117b51545d69aec060e23/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-997484",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-997484/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-997484",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-997484",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-997484",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d9684d01eaf38837d154044021c6025a2e1216c0ddbb41a27f9da45bb0e833fb",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d9684d01eaf3",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-997484": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "de7d7df359b2",
	                        "ingress-addon-legacy-997484"
	                    ],
	                    "NetworkID": "3eb57c0e3a8e00a48d60265e87a9fcf3731c4cbdf2660dcc016835c10a7caa0c",
	                    "EndpointID": "eccfe75be2f37a985928104418b55eb484f1eb97239325976de7ceb52c90322f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
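
Note on the inspect output above: the node container publishes ports 22, 2376, 5000, 8443 and 32443 only on 127.0.0.1, behind ephemeral host ports (32783-32787 in this run), so host-side access goes through those mappings rather than the container IP 192.168.49.2 unless the caller sits on the docker network. The live mapping for any one port can be read back with the standard docker CLI:

	docker port ingress-addon-legacy-997484 8443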
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-997484 -n ingress-addon-legacy-997484
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-997484 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-997484 logs -n 25: (1.017961127s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                     Args                                     |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| update-context | functional-702251                                                            | functional-702251           | jenkins | v1.31.2 | 17 Aug 23 21:19 UTC | 17 Aug 23 21:19 UTC |
	|                | update-context                                                               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                       |                             |         |         |                     |                     |
	| update-context | functional-702251                                                            | functional-702251           | jenkins | v1.31.2 | 17 Aug 23 21:19 UTC | 17 Aug 23 21:19 UTC |
	|                | update-context                                                               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                       |                             |         |         |                     |                     |
	| update-context | functional-702251                                                            | functional-702251           | jenkins | v1.31.2 | 17 Aug 23 21:19 UTC | 17 Aug 23 21:19 UTC |
	|                | update-context                                                               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                       |                             |         |         |                     |                     |
	| image          | functional-702251 image ls                                                   | functional-702251           | jenkins | v1.31.2 | 17 Aug 23 21:19 UTC | 17 Aug 23 21:19 UTC |
	| image          | functional-702251 image save                                                 | functional-702251           | jenkins | v1.31.2 | 17 Aug 23 21:19 UTC | 17 Aug 23 21:19 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-702251                     |                             |         |         |                     |                     |
	|                | /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-702251 image rm                                                   | functional-702251           | jenkins | v1.31.2 | 17 Aug 23 21:19 UTC | 17 Aug 23 21:19 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-702251                     |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-702251 image ls                                                   | functional-702251           | jenkins | v1.31.2 | 17 Aug 23 21:19 UTC | 17 Aug 23 21:19 UTC |
	| image          | functional-702251 image load                                                 | functional-702251           | jenkins | v1.31.2 | 17 Aug 23 21:19 UTC | 17 Aug 23 21:19 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-702251 image ls                                                   | functional-702251           | jenkins | v1.31.2 | 17 Aug 23 21:19 UTC | 17 Aug 23 21:19 UTC |
	| image          | functional-702251 image save --daemon                                        | functional-702251           | jenkins | v1.31.2 | 17 Aug 23 21:19 UTC | 17 Aug 23 21:19 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-702251                     |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-702251                                                            | functional-702251           | jenkins | v1.31.2 | 17 Aug 23 21:19 UTC | 17 Aug 23 21:19 UTC |
	|                | image ls --format short                                                      |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-702251                                                            | functional-702251           | jenkins | v1.31.2 | 17 Aug 23 21:19 UTC | 17 Aug 23 21:19 UTC |
	|                | image ls --format yaml                                                       |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-702251                                                            | functional-702251           | jenkins | v1.31.2 | 17 Aug 23 21:19 UTC | 17 Aug 23 21:19 UTC |
	|                | image ls --format json                                                       |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| ssh            | functional-702251 ssh pgrep                                                  | functional-702251           | jenkins | v1.31.2 | 17 Aug 23 21:19 UTC |                     |
	|                | buildkitd                                                                    |                             |         |         |                     |                     |
	| image          | functional-702251                                                            | functional-702251           | jenkins | v1.31.2 | 17 Aug 23 21:19 UTC | 17 Aug 23 21:19 UTC |
	|                | image ls --format table                                                      |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-702251 image build -t                                             | functional-702251           | jenkins | v1.31.2 | 17 Aug 23 21:19 UTC | 17 Aug 23 21:19 UTC |
	|                | localhost/my-image:functional-702251                                         |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                             |                             |         |         |                     |                     |
	| image          | functional-702251 image ls                                                   | functional-702251           | jenkins | v1.31.2 | 17 Aug 23 21:19 UTC | 17 Aug 23 21:19 UTC |
	| delete         | -p functional-702251                                                         | functional-702251           | jenkins | v1.31.2 | 17 Aug 23 21:19 UTC | 17 Aug 23 21:19 UTC |
	| start          | -p ingress-addon-legacy-997484                                               | ingress-addon-legacy-997484 | jenkins | v1.31.2 | 17 Aug 23 21:19 UTC | 17 Aug 23 21:21 UTC |
	|                | --kubernetes-version=v1.18.20                                                |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                                         |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                                     |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-997484                                                  | ingress-addon-legacy-997484 | jenkins | v1.31.2 | 17 Aug 23 21:21 UTC | 17 Aug 23 21:21 UTC |
	|                | addons enable ingress                                                        |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                       |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-997484                                                  | ingress-addon-legacy-997484 | jenkins | v1.31.2 | 17 Aug 23 21:21 UTC | 17 Aug 23 21:21 UTC |
	|                | addons enable ingress-dns                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                       |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-997484                                                  | ingress-addon-legacy-997484 | jenkins | v1.31.2 | 17 Aug 23 21:21 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                                |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                                 |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-997484 ip                                               | ingress-addon-legacy-997484 | jenkins | v1.31.2 | 17 Aug 23 21:24 UTC | 17 Aug 23 21:24 UTC |
	| addons         | ingress-addon-legacy-997484                                                  | ingress-addon-legacy-997484 | jenkins | v1.31.2 | 17 Aug 23 21:24 UTC | 17 Aug 23 21:24 UTC |
	|                | addons disable ingress-dns                                                   |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                       |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-997484                                                  | ingress-addon-legacy-997484 | jenkins | v1.31.2 | 17 Aug 23 21:24 UTC | 17 Aug 23 21:24 UTC |
	|                | addons disable ingress                                                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                       |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/17 21:19:59
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 21:19:59.324045   56500 out.go:296] Setting OutFile to fd 1 ...
	I0817 21:19:59.324182   56500 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:19:59.324192   56500 out.go:309] Setting ErrFile to fd 2...
	I0817 21:19:59.324199   56500 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:19:59.324406   56500 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-10716/.minikube/bin
	I0817 21:19:59.325005   56500 out.go:303] Setting JSON to false
	I0817 21:19:59.326426   56500 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3748,"bootTime":1692303452,"procs":799,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0817 21:19:59.326484   56500 start.go:138] virtualization: kvm guest
	I0817 21:19:59.328820   56500 out.go:177] * [ingress-addon-legacy-997484] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0817 21:19:59.330297   56500 notify.go:220] Checking for updates...
	I0817 21:19:59.330313   56500 out.go:177]   - MINIKUBE_LOCATION=16865
	I0817 21:19:59.331781   56500 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0817 21:19:59.333059   56500 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16865-10716/kubeconfig
	I0817 21:19:59.334285   56500 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-10716/.minikube
	I0817 21:19:59.335694   56500 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0817 21:19:59.337007   56500 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0817 21:19:59.338339   56500 driver.go:373] Setting default libvirt URI to qemu:///system
	I0817 21:19:59.358936   56500 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0817 21:19:59.359029   56500 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0817 21:19:59.416360   56500 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:37 SystemTime:2023-08-17 21:19:59.407798485 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0817 21:19:59.416475   56500 docker.go:294] overlay module found
	I0817 21:19:59.418474   56500 out.go:177] * Using the docker driver based on user configuration
	I0817 21:19:59.419963   56500 start.go:298] selected driver: docker
	I0817 21:19:59.419977   56500 start.go:902] validating driver "docker" against <nil>
	I0817 21:19:59.419990   56500 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0817 21:19:59.420724   56500 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0817 21:19:59.471680   56500 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:37 SystemTime:2023-08-17 21:19:59.463755898 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0817 21:19:59.471852   56500 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0817 21:19:59.472048   56500 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0817 21:19:59.473975   56500 out.go:177] * Using Docker driver with root privileges
	I0817 21:19:59.475421   56500 cni.go:84] Creating CNI manager for ""
	I0817 21:19:59.475442   56500 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0817 21:19:59.475456   56500 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0817 21:19:59.475471   56500 start_flags.go:319] config:
	{Name:ingress-addon-legacy-997484 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-997484 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:19:59.477163   56500 out.go:177] * Starting control plane node ingress-addon-legacy-997484 in cluster ingress-addon-legacy-997484
	I0817 21:19:59.478571   56500 cache.go:122] Beginning downloading kic base image for docker with crio
	I0817 21:19:59.479981   56500 out.go:177] * Pulling base image ...
	I0817 21:19:59.481355   56500 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0817 21:19:59.481461   56500 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0817 21:19:59.497444   56500 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0817 21:19:59.497465   56500 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0817 21:19:59.510992   56500 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0817 21:19:59.511007   56500 cache.go:57] Caching tarball of preloaded images
	I0817 21:19:59.511164   56500 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0817 21:19:59.512963   56500 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0817 21:19:59.514343   56500 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0817 21:19:59.556226   56500 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/16865-10716/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0817 21:20:06.364172   56500 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0817 21:20:06.364259   56500 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16865-10716/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0817 21:20:07.316123   56500 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I0817 21:20:07.316439   56500 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/config.json ...
	I0817 21:20:07.316471   56500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/config.json: {Name:mk439f7e39faf49426024f56cb97d36a8ee66bbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:20:07.316626   56500 cache.go:195] Successfully downloaded all kic artifacts
	I0817 21:20:07.316654   56500 start.go:365] acquiring machines lock for ingress-addon-legacy-997484: {Name:mk74b2acfe783788b2ada515286c59b1f3beef8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 21:20:07.316694   56500 start.go:369] acquired machines lock for "ingress-addon-legacy-997484" in 28.96µs
	I0817 21:20:07.316712   56500 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-997484 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-997484 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0817 21:20:07.316767   56500 start.go:125] createHost starting for "" (driver="docker")
	I0817 21:20:07.319153   56500 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0817 21:20:07.319421   56500 start.go:159] libmachine.API.Create for "ingress-addon-legacy-997484" (driver="docker")
	I0817 21:20:07.319454   56500 client.go:168] LocalClient.Create starting
	I0817 21:20:07.319509   56500 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca.pem
	I0817 21:20:07.319538   56500 main.go:141] libmachine: Decoding PEM data...
	I0817 21:20:07.319553   56500 main.go:141] libmachine: Parsing certificate...
	I0817 21:20:07.319610   56500 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16865-10716/.minikube/certs/cert.pem
	I0817 21:20:07.319628   56500 main.go:141] libmachine: Decoding PEM data...
	I0817 21:20:07.319638   56500 main.go:141] libmachine: Parsing certificate...
	I0817 21:20:07.319904   56500 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-997484 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0817 21:20:07.334931   56500 cli_runner.go:211] docker network inspect ingress-addon-legacy-997484 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0817 21:20:07.334994   56500 network_create.go:281] running [docker network inspect ingress-addon-legacy-997484] to gather additional debugging logs...
	I0817 21:20:07.335007   56500 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-997484
	W0817 21:20:07.349350   56500 cli_runner.go:211] docker network inspect ingress-addon-legacy-997484 returned with exit code 1
	I0817 21:20:07.349374   56500 network_create.go:284] error running [docker network inspect ingress-addon-legacy-997484]: docker network inspect ingress-addon-legacy-997484: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-997484 not found
	I0817 21:20:07.349387   56500 network_create.go:286] output of [docker network inspect ingress-addon-legacy-997484]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-997484 not found
	
	** /stderr **
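
The non-zero exits above are the expected first-start path: minikube probes for an existing network and treats "not found" as a cue to create one rather than as a fatal error. A minimal Go sketch of the same probe, assuming only a local docker CLI (the network name is the one from this run):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same probe cli_runner.go performs above: ask docker for the
	// network's subnet; exit code 1 with "not found" means "create it".
	out, err := exec.Command("docker", "network", "inspect",
		"ingress-addon-legacy-997484",
		"--format", "{{range .IPAM.Config}}{{.Subnet}}{{end}}").CombinedOutput()
	if err != nil {
		fmt.Printf("network missing or inspect failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("existing subnet: %s\n", out)
}

On a fresh host this prints the "network missing" branch, matching the W-level lines above.
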
	I0817 21:20:07.349425   56500 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 21:20:07.365134   56500 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000b98a70}
	I0817 21:20:07.365164   56500 network_create.go:123] attempt to create docker network ingress-addon-legacy-997484 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0817 21:20:07.365209   56500 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-997484 ingress-addon-legacy-997484
	I0817 21:20:07.416435   56500 network_create.go:107] docker network ingress-addon-legacy-997484 192.168.49.0/24 created
	I0817 21:20:07.416464   56500 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-997484" container
	I0817 21:20:07.416520   56500 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0817 21:20:07.430712   56500 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-997484 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-997484 --label created_by.minikube.sigs.k8s.io=true
	I0817 21:20:07.447510   56500 oci.go:103] Successfully created a docker volume ingress-addon-legacy-997484
	I0817 21:20:07.447593   56500 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-997484-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-997484 --entrypoint /usr/bin/test -v ingress-addon-legacy-997484:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0817 21:20:09.166241   56500 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-997484-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-997484 --entrypoint /usr/bin/test -v ingress-addon-legacy-997484:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib: (1.718600883s)
	I0817 21:20:09.166278   56500 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-997484
	I0817 21:20:09.166318   56500 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0817 21:20:09.166349   56500 kic.go:190] Starting extracting preloaded images to volume ...
	I0817 21:20:09.166412   56500 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16865-10716/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-997484:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0817 21:20:14.398553   56500 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16865-10716/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-997484:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (5.232089119s)
	I0817 21:20:14.398583   56500 kic.go:199] duration metric: took 5.232229 seconds to extract preloaded images to volume
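
The "Completed:" lines are minikube's duration metrics around a blocking docker run. The extraction trick itself is a throwaway container whose entrypoint is tar, with the tarball bind-mounted read-only and the named volume mounted at the target directory. A timed Go sketch of the same pattern; the host path /tmp/preload.tar.lz4 and the volume name demo-vol are hypothetical stand-ins, and the image tag is the kicbase image logged above (the run pins it to a sha256 digest):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// Throwaway container whose entrypoint is tar: the tarball is
	// bind-mounted read-only, the named volume receives the contents.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", "/tmp/preload.tar.lz4:/preloaded.tar:ro", // hypothetical host path
		"-v", "demo-vol:/extractDir", // hypothetical volume name
		"gcr.io/k8s-minikube/kicbase:v0.0.40",
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
	// Corresponds to the "(5.232089119s)" duration metric above.
	fmt.Printf("extracted in %s\n", time.Since(start))
}
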
	W0817 21:20:14.398717   56500 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0817 21:20:14.398827   56500 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0817 21:20:14.449472   56500 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-997484 --name ingress-addon-legacy-997484 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-997484 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-997484 --network ingress-addon-legacy-997484 --ip 192.168.49.2 --volume ingress-addon-legacy-997484:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0817 21:20:14.732442   56500 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-997484 --format={{.State.Running}}
	I0817 21:20:14.750658   56500 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-997484 --format={{.State.Status}}
	I0817 21:20:14.768003   56500 cli_runner.go:164] Run: docker exec ingress-addon-legacy-997484 stat /var/lib/dpkg/alternatives/iptables
	I0817 21:20:14.818978   56500 oci.go:144] the created container "ingress-addon-legacy-997484" has a running status.
	I0817 21:20:14.819009   56500 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16865-10716/.minikube/machines/ingress-addon-legacy-997484/id_rsa...
	I0817 21:20:15.004284   56500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/machines/ingress-addon-legacy-997484/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0817 21:20:15.004340   56500 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16865-10716/.minikube/machines/ingress-addon-legacy-997484/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0817 21:20:15.027396   56500 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-997484 --format={{.State.Status}}
	I0817 21:20:15.050992   56500 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0817 21:20:15.051016   56500 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-997484 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0817 21:20:15.129611   56500 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-997484 --format={{.State.Status}}
	I0817 21:20:15.151747   56500 machine.go:88] provisioning docker machine ...
	I0817 21:20:15.151797   56500 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-997484"
	I0817 21:20:15.151856   56500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-997484
	I0817 21:20:15.171309   56500 main.go:141] libmachine: Using SSH client type: native
	I0817 21:20:15.171897   56500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0817 21:20:15.171926   56500 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-997484 && echo "ingress-addon-legacy-997484" | sudo tee /etc/hostname
	I0817 21:20:15.404151   56500 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-997484
	
	I0817 21:20:15.404240   56500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-997484
	I0817 21:20:15.421915   56500 main.go:141] libmachine: Using SSH client type: native
	I0817 21:20:15.422325   56500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0817 21:20:15.422354   56500 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-997484' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-997484/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-997484' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 21:20:15.545873   56500 main.go:141] libmachine: SSH cmd err, output: <nil>: 
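
The empty command output above means the guard matched nothing to change: the script maps 127.0.1.1 to the machine hostname only when no /etc/hosts line already ends with that hostname, rewriting an existing 127.0.1.1 entry if one exists and appending otherwise. A Go restatement of that logic, assuming it runs as root inside the guest (the hostname is this run's profile name):

package main

import (
	"fmt"
	"log"
	"os"
	"regexp"
)

func main() {
	const host = "ingress-addon-legacy-997484"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	// Outer guard: hostname already resolvable? Then do nothing.
	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(host) + `$`).Match(data) {
		return
	}
	// Inner branch: rewrite an existing 127.0.1.1 line, else append one.
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if re.Match(data) {
		data = re.ReplaceAll(data, []byte("127.0.1.1 "+host))
	} else {
		data = append(data, []byte(fmt.Sprintf("127.0.1.1 %s\n", host))...)
	}
	if err := os.WriteFile("/etc/hosts", data, 0644); err != nil {
		log.Fatal(err)
	}
}
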
	I0817 21:20:15.545922   56500 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16865-10716/.minikube CaCertPath:/home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16865-10716/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16865-10716/.minikube}
	I0817 21:20:15.545958   56500 ubuntu.go:177] setting up certificates
	I0817 21:20:15.545971   56500 provision.go:83] configureAuth start
	I0817 21:20:15.546036   56500 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-997484
	I0817 21:20:15.562900   56500 provision.go:138] copyHostCerts
	I0817 21:20:15.562933   56500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16865-10716/.minikube/cert.pem
	I0817 21:20:15.562962   56500 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-10716/.minikube/cert.pem, removing ...
	I0817 21:20:15.562967   56500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-10716/.minikube/cert.pem
	I0817 21:20:15.563030   56500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16865-10716/.minikube/cert.pem (1123 bytes)
	I0817 21:20:15.563130   56500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16865-10716/.minikube/key.pem
	I0817 21:20:15.563150   56500 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-10716/.minikube/key.pem, removing ...
	I0817 21:20:15.563154   56500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-10716/.minikube/key.pem
	I0817 21:20:15.563180   56500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16865-10716/.minikube/key.pem (1679 bytes)
	I0817 21:20:15.563227   56500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16865-10716/.minikube/ca.pem
	I0817 21:20:15.563244   56500 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-10716/.minikube/ca.pem, removing ...
	I0817 21:20:15.563250   56500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-10716/.minikube/ca.pem
	I0817 21:20:15.563269   56500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16865-10716/.minikube/ca.pem (1078 bytes)
	I0817 21:20:15.563324   56500 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16865-10716/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-997484 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-997484]
	I0817 21:20:15.818987   56500 provision.go:172] copyRemoteCerts
	I0817 21:20:15.819044   56500 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 21:20:15.819079   56500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-997484
	I0817 21:20:15.835709   56500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/ingress-addon-legacy-997484/id_rsa Username:docker}
	I0817 21:20:15.926585   56500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0817 21:20:15.926648   56500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0817 21:20:15.946759   56500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0817 21:20:15.946819   56500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0817 21:20:15.967209   56500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0817 21:20:15.967268   56500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0817 21:20:15.987451   56500 provision.go:86] duration metric: configureAuth took 441.465249ms
	I0817 21:20:15.987474   56500 ubuntu.go:193] setting minikube options for container-runtime
	I0817 21:20:15.987664   56500 config.go:182] Loaded profile config "ingress-addon-legacy-997484": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0817 21:20:15.987771   56500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-997484
	I0817 21:20:16.004448   56500 main.go:141] libmachine: Using SSH client type: native
	I0817 21:20:16.004887   56500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0817 21:20:16.004917   56500 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0817 21:20:16.237480   56500 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0817 21:20:16.237509   56500 machine.go:91] provisioned docker machine in 1.085741523s
	I0817 21:20:16.237521   56500 client.go:171] LocalClient.Create took 8.918058017s
	I0817 21:20:16.237545   56500 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-997484" took 8.91812153s
	I0817 21:20:16.237559   56500 start.go:300] post-start starting for "ingress-addon-legacy-997484" (driver="docker")
	I0817 21:20:16.237572   56500 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 21:20:16.237663   56500 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 21:20:16.237719   56500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-997484
	I0817 21:20:16.254068   56500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/ingress-addon-legacy-997484/id_rsa Username:docker}
	I0817 21:20:16.346945   56500 ssh_runner.go:195] Run: cat /etc/os-release
	I0817 21:20:16.350095   56500 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0817 21:20:16.350130   56500 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0817 21:20:16.350140   56500 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0817 21:20:16.350145   56500 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0817 21:20:16.350153   56500 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-10716/.minikube/addons for local assets ...
	I0817 21:20:16.350210   56500 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-10716/.minikube/files for local assets ...
	I0817 21:20:16.350279   56500 filesync.go:149] local asset: /home/jenkins/minikube-integration/16865-10716/.minikube/files/etc/ssl/certs/175042.pem -> 175042.pem in /etc/ssl/certs
	I0817 21:20:16.350289   56500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/files/etc/ssl/certs/175042.pem -> /etc/ssl/certs/175042.pem
	I0817 21:20:16.350371   56500 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0817 21:20:16.357852   56500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/files/etc/ssl/certs/175042.pem --> /etc/ssl/certs/175042.pem (1708 bytes)
	I0817 21:20:16.378067   56500 start.go:303] post-start completed in 140.49398ms
	I0817 21:20:16.378418   56500 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-997484
	I0817 21:20:16.394556   56500 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/config.json ...
	I0817 21:20:16.394817   56500 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 21:20:16.394864   56500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-997484
	I0817 21:20:16.411844   56500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/ingress-addon-legacy-997484/id_rsa Username:docker}
	I0817 21:20:16.498545   56500 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0817 21:20:16.502445   56500 start.go:128] duration metric: createHost completed in 9.185663813s
	I0817 21:20:16.502471   56500 start.go:83] releasing machines lock for "ingress-addon-legacy-997484", held for 9.18576482s
	I0817 21:20:16.502547   56500 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-997484
	I0817 21:20:16.518599   56500 ssh_runner.go:195] Run: cat /version.json
	I0817 21:20:16.518657   56500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-997484
	I0817 21:20:16.518699   56500 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0817 21:20:16.518758   56500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-997484
	I0817 21:20:16.534848   56500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/ingress-addon-legacy-997484/id_rsa Username:docker}
	I0817 21:20:16.535034   56500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/ingress-addon-legacy-997484/id_rsa Username:docker}
	I0817 21:20:16.710742   56500 ssh_runner.go:195] Run: systemctl --version
	I0817 21:20:16.714871   56500 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0817 21:20:16.849878   56500 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0817 21:20:16.854008   56500 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 21:20:16.871479   56500 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0817 21:20:16.871552   56500 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 21:20:16.897542   56500 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0817 21:20:16.897565   56500 start.go:466] detecting cgroup driver to use...
	I0817 21:20:16.897592   56500 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0817 21:20:16.897634   56500 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0817 21:20:16.910623   56500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0817 21:20:16.920038   56500 docker.go:196] disabling cri-docker service (if available) ...
	I0817 21:20:16.920082   56500 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0817 21:20:16.931483   56500 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0817 21:20:16.943674   56500 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0817 21:20:17.017379   56500 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0817 21:20:17.092421   56500 docker.go:212] disabling docker service ...
	I0817 21:20:17.092479   56500 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0817 21:20:17.109477   56500 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0817 21:20:17.119486   56500 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0817 21:20:17.193062   56500 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0817 21:20:17.269681   56500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0817 21:20:17.279496   56500 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 21:20:17.293256   56500 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0817 21:20:17.293309   56500 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 21:20:17.301506   56500 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0817 21:20:17.301560   56500 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 21:20:17.309580   56500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 21:20:17.317679   56500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 21:20:17.325674   56500 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0817 21:20:17.333037   56500 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 21:20:17.340100   56500 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 21:20:17.347205   56500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0817 21:20:17.420149   56500 ssh_runner.go:195] Run: sudo systemctl restart crio
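
Taken together, the three sed edits above pin the pause image, switch cri-o to the host's cgroupfs driver, and move conmon into the pod cgroup. Assuming the stock kicbase drop-in layout (the section headers below are an assumption; the three values follow directly from the commands), /etc/crio/crio.conf.d/02-crio.conf ends up containing:

[crio.image]
pause_image = "registry.k8s.io/pause:3.2"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
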
	I0817 21:20:17.514911   56500 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0817 21:20:17.514970   56500 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0817 21:20:17.518260   56500 start.go:534] Will wait 60s for crictl version
	I0817 21:20:17.518304   56500 ssh_runner.go:195] Run: which crictl
	I0817 21:20:17.521129   56500 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 21:20:17.553098   56500 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0817 21:20:17.553174   56500 ssh_runner.go:195] Run: crio --version
	I0817 21:20:17.585816   56500 ssh_runner.go:195] Run: crio --version
	I0817 21:20:17.619133   56500 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I0817 21:20:17.620859   56500 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-997484 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 21:20:17.637317   56500 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0817 21:20:17.640622   56500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 21:20:17.650244   56500 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0817 21:20:17.650297   56500 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 21:20:17.691489   56500 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0817 21:20:17.691546   56500 ssh_runner.go:195] Run: which lz4
	I0817 21:20:17.694974   56500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0817 21:20:17.695061   56500 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0817 21:20:17.698069   56500 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0817 21:20:17.698096   56500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I0817 21:20:18.625356   56500 crio.go:444] Took 0.930318 seconds to copy over tarball
	I0817 21:20:18.625440   56500 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0817 21:20:20.831001   56500 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.205528854s)
	I0817 21:20:20.831032   56500 crio.go:451] Took 2.205648 seconds to extract the tarball
	I0817 21:20:20.831041   56500 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0817 21:20:20.899070   56500 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 21:20:20.930012   56500 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0817 21:20:20.930036   56500 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0817 21:20:20.930089   56500 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 21:20:20.930130   56500 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0817 21:20:20.930147   56500 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0817 21:20:20.930160   56500 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0817 21:20:20.930176   56500 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0817 21:20:20.930139   56500 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0817 21:20:20.930271   56500 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0817 21:20:20.930277   56500 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0817 21:20:20.931347   56500 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0817 21:20:20.931371   56500 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0817 21:20:20.931381   56500 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0817 21:20:20.931357   56500 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0817 21:20:20.931374   56500 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0817 21:20:20.931432   56500 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0817 21:20:20.931403   56500 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 21:20:20.931671   56500 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0817 21:20:21.134461   56500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0817 21:20:21.140353   56500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0817 21:20:21.167063   56500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 21:20:21.170341   56500 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0817 21:20:21.170426   56500 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0817 21:20:21.170488   56500 ssh_runner.go:195] Run: which crictl
	I0817 21:20:21.177990   56500 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0817 21:20:21.178031   56500 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0817 21:20:21.178069   56500 ssh_runner.go:195] Run: which crictl
	I0817 21:20:21.181418   56500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0817 21:20:21.196883   56500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0817 21:20:21.202690   56500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0817 21:20:21.226703   56500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0817 21:20:21.256452   56500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0817 21:20:21.305130   56500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0817 21:20:21.305142   56500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0817 21:20:21.305202   56500 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0817 21:20:21.305228   56500 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0817 21:20:21.305261   56500 ssh_runner.go:195] Run: which crictl
	I0817 21:20:21.305288   56500 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0817 21:20:21.305311   56500 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0817 21:20:21.305345   56500 ssh_runner.go:195] Run: which crictl
	I0817 21:20:21.305357   56500 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0817 21:20:21.305387   56500 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0817 21:20:21.305407   56500 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0817 21:20:21.305425   56500 ssh_runner.go:195] Run: which crictl
	I0817 21:20:21.305428   56500 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0817 21:20:21.305448   56500 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0817 21:20:21.305454   56500 ssh_runner.go:195] Run: which crictl
	I0817 21:20:21.305465   56500 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0817 21:20:21.305493   56500 ssh_runner.go:195] Run: which crictl
	I0817 21:20:21.352453   56500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0817 21:20:21.352556   56500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0817 21:20:21.352583   56500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0817 21:20:21.352656   56500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0817 21:20:21.352756   56500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0817 21:20:21.352802   56500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0817 21:20:21.352820   56500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0817 21:20:21.442086   56500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0817 21:20:21.443849   56500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0817 21:20:21.446619   56500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0817 21:20:21.446687   56500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0817 21:20:21.446718   56500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0817 21:20:21.446756   56500 cache_images.go:92] LoadImages completed in 516.708421ms
	W0817 21:20:21.446838   56500 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7: no such file or directory
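
The "needs transfer" lines mean each required tag was absent from the container runtime, and the final X warning means the on-disk image cache was empty too, so kubeadm will pull the images from the registry instead; this is a slow path, not a failure. A sketch of the underlying presence check against crictl's JSON listing (the images[].repoTags response shape is an assumption about this crictl version):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var resp struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}
	if err := json.Unmarshal(out, &resp); err != nil {
		log.Fatal(err)
	}
	want := "registry.k8s.io/kube-apiserver:v1.18.20"
	for _, img := range resp.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				fmt.Println("preloaded:", want)
				return
			}
		}
	}
	fmt.Println("not preloaded, needs transfer or pull:", want)
}
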
	I0817 21:20:21.446909   56500 ssh_runner.go:195] Run: crio config
	I0817 21:20:21.524944   56500 cni.go:84] Creating CNI manager for ""
	I0817 21:20:21.524966   56500 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0817 21:20:21.524983   56500 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 21:20:21.525000   56500 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-997484 NodeName:ingress-addon-legacy-997484 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0817 21:20:21.525125   56500 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-997484"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0817 21:20:21.525196   56500 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-997484 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-997484 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
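
The kubeadm config rendered above and the kubelet ExecStart flags both have to agree with cri-o on the cgroup driver; in this run all three say cgroupfs. A minimal sketch of extracting that field from the KubeletConfiguration document for such a consistency check, assuming gopkg.in/yaml.v3 is available (the embedded document is trimmed to the fields of interest):

package main

import (
	"fmt"
	"log"

	"gopkg.in/yaml.v3"
)

// Trimmed copy of the KubeletConfiguration document generated above.
const kubeletDoc = `
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
failSwapOn: false
`

func main() {
	var cfg struct {
		Kind         string `yaml:"kind"`
		CgroupDriver string `yaml:"cgroupDriver"`
		FailSwapOn   bool   `yaml:"failSwapOn"`
	}
	if err := yaml.Unmarshal([]byte(kubeletDoc), &cfg); err != nil {
		log.Fatal(err)
	}
	// Should print cgroupfs, matching the cgroup_manager set for cri-o above.
	fmt.Printf("%s: cgroupDriver=%s failSwapOn=%v\n", cfg.Kind, cfg.CgroupDriver, cfg.FailSwapOn)
}
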
	I0817 21:20:21.525243   56500 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0817 21:20:21.533162   56500 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 21:20:21.533240   56500 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 21:20:21.540762   56500 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0817 21:20:21.555525   56500 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0817 21:20:21.570464   56500 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0817 21:20:21.585336   56500 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0817 21:20:21.588354   56500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 21:20:21.597791   56500 certs.go:56] Setting up /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484 for IP: 192.168.49.2
	I0817 21:20:21.597828   56500 certs.go:190] acquiring lock for shared ca certs: {Name:mkccb042866dbfd72de305663f91f6bc6da7b7e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:20:21.597990   56500 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16865-10716/.minikube/ca.key
	I0817 21:20:21.598032   56500 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16865-10716/.minikube/proxy-client-ca.key
	I0817 21:20:21.598074   56500 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/client.key
	I0817 21:20:21.598088   56500 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/client.crt with IP's: []
	I0817 21:20:21.812111   56500 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/client.crt ...
	I0817 21:20:21.812146   56500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/client.crt: {Name:mke2d6b2a1074644dc219975966934eb38a28fab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:20:21.812345   56500 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/client.key ...
	I0817 21:20:21.812361   56500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/client.key: {Name:mkc36727cb639774f1b5533f8159ba1a3f1b365c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:20:21.812463   56500 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/apiserver.key.dd3b5fb2
	I0817 21:20:21.812483   56500 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0817 21:20:21.975798   56500 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/apiserver.crt.dd3b5fb2 ...
	I0817 21:20:21.975830   56500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/apiserver.crt.dd3b5fb2: {Name:mk212bd6429534eb2954418d1f4bd059cb0abcc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:20:21.976019   56500 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/apiserver.key.dd3b5fb2 ...
	I0817 21:20:21.976035   56500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/apiserver.key.dd3b5fb2: {Name:mk69a21a777cddcb4d4cf8cc9e0d9a0bc73af046 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:20:21.976115   56500 certs.go:337] copying /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/apiserver.crt
	I0817 21:20:21.976194   56500 certs.go:341] copying /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/apiserver.key
	I0817 21:20:21.976243   56500 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/proxy-client.key
	I0817 21:20:21.976256   56500 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/proxy-client.crt with IP's: []
	I0817 21:20:22.042043   56500 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/proxy-client.crt ...
	I0817 21:20:22.042073   56500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/proxy-client.crt: {Name:mk7499bf89b9df08d984897dcdd0b03323ffc967 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:20:22.042262   56500 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/proxy-client.key ...
	I0817 21:20:22.042280   56500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/proxy-client.key: {Name:mkda0fa79e167023f6ae19558b883cbfff33e993 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
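
"generating minikube signed cert ... with IP's" means certificates whose subject alternative names are IP addresses rather than DNS names. A Go sketch using this run's apiserver SAN set; it is self-signed for brevity, whereas minikube signs with its minikubeCA key pair:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		// The SAN IPs logged for apiserver.crt.dd3b5fb2 above.
		IPAddresses: []net.IP{
			net.ParseIP("192.168.49.2"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		log.Fatal(err)
	}
}
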
	I0817 21:20:22.042378   56500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0817 21:20:22.042404   56500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0817 21:20:22.042414   56500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0817 21:20:22.042426   56500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0817 21:20:22.042438   56500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0817 21:20:22.042447   56500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0817 21:20:22.042460   56500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0817 21:20:22.042470   56500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0817 21:20:22.042518   56500 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/home/jenkins/minikube-integration/16865-10716/.minikube/certs/17504.pem (1338 bytes)
	W0817 21:20:22.042552   56500 certs.go:433] ignoring /home/jenkins/minikube-integration/16865-10716/.minikube/certs/home/jenkins/minikube-integration/16865-10716/.minikube/certs/17504_empty.pem, impossibly tiny 0 bytes
	I0817 21:20:22.042562   56500 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 21:20:22.042583   56500 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca.pem (1078 bytes)
	I0817 21:20:22.042606   56500 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/home/jenkins/minikube-integration/16865-10716/.minikube/certs/cert.pem (1123 bytes)
	I0817 21:20:22.042636   56500 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/home/jenkins/minikube-integration/16865-10716/.minikube/certs/key.pem (1679 bytes)
	I0817 21:20:22.042681   56500 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-10716/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16865-10716/.minikube/files/etc/ssl/certs/175042.pem (1708 bytes)
	I0817 21:20:22.042706   56500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/17504.pem -> /usr/share/ca-certificates/17504.pem
	I0817 21:20:22.042718   56500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/files/etc/ssl/certs/175042.pem -> /usr/share/ca-certificates/175042.pem
	I0817 21:20:22.042733   56500 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:20:22.043296   56500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 21:20:22.064866   56500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0817 21:20:22.085430   56500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 21:20:22.106282   56500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0817 21:20:22.126700   56500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 21:20:22.146649   56500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0817 21:20:22.166562   56500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 21:20:22.186208   56500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0817 21:20:22.205915   56500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/certs/17504.pem --> /usr/share/ca-certificates/17504.pem (1338 bytes)
	I0817 21:20:22.226393   56500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/files/etc/ssl/certs/175042.pem --> /usr/share/ca-certificates/175042.pem (1708 bytes)
	I0817 21:20:22.246538   56500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 21:20:22.266663   56500 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0817 21:20:22.281298   56500 ssh_runner.go:195] Run: openssl version
	I0817 21:20:22.286070   56500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17504.pem && ln -fs /usr/share/ca-certificates/17504.pem /etc/ssl/certs/17504.pem"
	I0817 21:20:22.293851   56500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17504.pem
	I0817 21:20:22.296865   56500 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 17 21:16 /usr/share/ca-certificates/17504.pem
	I0817 21:20:22.296914   56500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17504.pem
	I0817 21:20:22.302908   56500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17504.pem /etc/ssl/certs/51391683.0"
	I0817 21:20:22.310940   56500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/175042.pem && ln -fs /usr/share/ca-certificates/175042.pem /etc/ssl/certs/175042.pem"
	I0817 21:20:22.318724   56500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/175042.pem
	I0817 21:20:22.321649   56500 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 17 21:16 /usr/share/ca-certificates/175042.pem
	I0817 21:20:22.321680   56500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/175042.pem
	I0817 21:20:22.327695   56500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/175042.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 21:20:22.335672   56500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 21:20:22.343267   56500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:20:22.346106   56500 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 17 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:20:22.346147   56500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:20:22.351946   56500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
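The three test/hash/ln sequences above follow the OpenSSL hashed-directory convention: each CA under /usr/share/ca-certificates is symlinked into /etc/ssl/certs under the subject-name hash that `openssl x509 -hash -noout` prints (51391683.0, 3ec20f2e.0, and b5213941.0 here), which is how OpenSSL-based clients locate trusted CAs at verify time. A minimal Go sketch of the same step, assuming `openssl` is on PATH; the paths are illustrative:

	// Link a CA certificate into a hashed directory the way the log does:
	// `openssl x509 -hash` prints the subject-name hash, and the <hash>.0
	// symlink is what OpenSSL-based clients look up during verification.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func linkBySubjectHash(pemPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // mimic `ln -fs`: replace any existing link
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}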
	I0817 21:20:22.359800   56500 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0817 21:20:22.362627   56500 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0817 21:20:22.362671   56500 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-997484 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-997484 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:20:22.362743   56500 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0817 21:20:22.362771   56500 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 21:20:22.394535   56500 cri.go:89] found id: ""
	I0817 21:20:22.394597   56500 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 21:20:22.402263   56500 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 21:20:22.409841   56500 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0817 21:20:22.409881   56500 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 21:20:22.417226   56500 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 21:20:22.417263   56500 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0817 21:20:22.458106   56500 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0817 21:20:22.458168   56500 kubeadm.go:322] [preflight] Running pre-flight checks
	I0817 21:20:22.496406   56500 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0817 21:20:22.496538   56500 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1039-gcp
	I0817 21:20:22.496605   56500 kubeadm.go:322] OS: Linux
	I0817 21:20:22.496659   56500 kubeadm.go:322] CGROUPS_CPU: enabled
	I0817 21:20:22.496701   56500 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0817 21:20:22.496761   56500 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0817 21:20:22.496831   56500 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0817 21:20:22.496905   56500 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0817 21:20:22.496972   56500 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0817 21:20:22.561878   56500 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0817 21:20:22.562020   56500 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0817 21:20:22.562161   56500 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0817 21:20:22.737489   56500 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0817 21:20:22.738459   56500 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0817 21:20:22.738529   56500 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0817 21:20:22.808202   56500 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0817 21:20:22.811404   56500 out.go:204]   - Generating certificates and keys ...
	I0817 21:20:22.811544   56500 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0817 21:20:22.811651   56500 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0817 21:20:22.996767   56500 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0817 21:20:23.262627   56500 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0817 21:20:23.383993   56500 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0817 21:20:23.668022   56500 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0817 21:20:23.769931   56500 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0817 21:20:23.770137   56500 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-997484 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0817 21:20:24.151112   56500 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0817 21:20:24.151284   56500 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-997484 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0817 21:20:24.387875   56500 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0817 21:20:24.500889   56500 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0817 21:20:24.653437   56500 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0817 21:20:24.653500   56500 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0817 21:20:24.877436   56500 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0817 21:20:25.118795   56500 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0817 21:20:25.259376   56500 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0817 21:20:25.323093   56500 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0817 21:20:25.323754   56500 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0817 21:20:25.326155   56500 out.go:204]   - Booting up control plane ...
	I0817 21:20:25.326259   56500 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0817 21:20:25.329266   56500 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0817 21:20:25.330195   56500 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0817 21:20:25.330859   56500 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0817 21:20:25.333603   56500 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0817 21:20:32.335930   56500 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.002309 seconds
	I0817 21:20:32.336100   56500 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0817 21:20:32.345538   56500 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0817 21:20:32.861767   56500 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0817 21:20:32.861994   56500 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-997484 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0817 21:20:33.368363   56500 kubeadm.go:322] [bootstrap-token] Using token: qeyau9.c234wbfum83jq5lv
	I0817 21:20:33.369945   56500 out.go:204]   - Configuring RBAC rules ...
	I0817 21:20:33.370084   56500 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0817 21:20:33.373098   56500 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0817 21:20:33.379152   56500 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0817 21:20:33.380806   56500 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0817 21:20:33.382600   56500 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0817 21:20:33.384238   56500 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0817 21:20:33.391351   56500 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0817 21:20:33.615140   56500 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0817 21:20:33.782435   56500 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0817 21:20:33.783719   56500 kubeadm.go:322] 
	I0817 21:20:33.783807   56500 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0817 21:20:33.783823   56500 kubeadm.go:322] 
	I0817 21:20:33.783914   56500 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0817 21:20:33.783922   56500 kubeadm.go:322] 
	I0817 21:20:33.783952   56500 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0817 21:20:33.784036   56500 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0817 21:20:33.784103   56500 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0817 21:20:33.784112   56500 kubeadm.go:322] 
	I0817 21:20:33.784176   56500 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0817 21:20:33.784259   56500 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0817 21:20:33.784347   56500 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0817 21:20:33.784356   56500 kubeadm.go:322] 
	I0817 21:20:33.784433   56500 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0817 21:20:33.784534   56500 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0817 21:20:33.784547   56500 kubeadm.go:322] 
	I0817 21:20:33.784643   56500 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token qeyau9.c234wbfum83jq5lv \
	I0817 21:20:33.784738   56500 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:6990f7150c46d703a60b6aaa6f152cf1f359295cabe399f949b0e443e5fdc599 \
	I0817 21:20:33.784761   56500 kubeadm.go:322]     --control-plane 
	I0817 21:20:33.784777   56500 kubeadm.go:322] 
	I0817 21:20:33.784879   56500 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0817 21:20:33.784888   56500 kubeadm.go:322] 
	I0817 21:20:33.784984   56500 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token qeyau9.c234wbfum83jq5lv \
	I0817 21:20:33.785100   56500 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:6990f7150c46d703a60b6aaa6f152cf1f359295cabe399f949b0e443e5fdc599 
	I0817 21:20:33.786703   56500 kubeadm.go:322] W0817 21:20:22.457531    1371 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0817 21:20:33.786993   56500 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1039-gcp\n", err: exit status 1
	I0817 21:20:33.787101   56500 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0817 21:20:33.787265   56500 kubeadm.go:322] W0817 21:20:25.328952    1371 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0817 21:20:33.787444   56500 kubeadm.go:322] W0817 21:20:25.329994    1371 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
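For reference, the sha256:6990f7... value in the join commands above is not a hash of the whole CA certificate: kubeadm's --discovery-token-ca-cert-hash is the SHA-256 of the CA certificate's DER-encoded Subject Public Key Info. A minimal Go sketch that reproduces it from the cert path used in this run:

	// Recompute kubeadm's --discovery-token-ca-cert-hash: SHA-256 over the
	// DER-encoded Subject Public Key Info of the cluster CA certificate.
	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
	}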
	I0817 21:20:33.787477   56500 cni.go:84] Creating CNI manager for ""
	I0817 21:20:33.787495   56500 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0817 21:20:33.789147   56500 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0817 21:20:33.790467   56500 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0817 21:20:33.794305   56500 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0817 21:20:33.794322   56500 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0817 21:20:33.810676   56500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0817 21:20:34.239518   56500 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 21:20:34.239594   56500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:20:34.239621   56500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=887b29127f76723e975982e9ba9e8c24f3dd2612 minikube.k8s.io/name=ingress-addon-legacy-997484 minikube.k8s.io/updated_at=2023_08_17T21_20_34_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:20:34.346571   56500 ops.go:34] apiserver oom_adj: -16
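The ops.go line confirms the kubelet deprioritized the apiserver for the OOM killer: critical static pods get a strongly negative oom_score_adj, which the kernel reports as -16 on the legacy /proc/<pid>/oom_adj scale read above. A minimal Go sketch of the same read, with the PID argument standing in for the pgrep result:

	// Read the legacy OOM adjustment for a process, as minikube does for
	// kube-apiserver. Usage: oomadj <pid>; expect -16 for the apiserver.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		if len(os.Args) < 2 {
			fmt.Fprintln(os.Stderr, "usage: oomadj <pid>")
			os.Exit(1)
		}
		b, err := os.ReadFile("/proc/" + os.Args[1] + "/oom_adj")
		if err != nil {
			panic(err)
		}
		fmt.Println(strings.TrimSpace(string(b)))
	}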
	I0817 21:20:34.346734   56500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:20:34.409856   56500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:20:34.983018   56500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:20:35.483114   56500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:20:35.983114   56500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:20:36.483124   56500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:20:36.983429   56500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:20:37.482831   56500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:20:37.982767   56500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:20:38.483158   56500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:20:38.983112   56500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:20:39.483131   56500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:20:39.982969   56500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:20:40.483130   56500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:20:40.983040   56500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:20:41.482785   56500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:20:41.982542   56500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:20:42.482805   56500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:20:42.982988   56500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:20:43.483045   56500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:20:43.982815   56500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:20:44.482762   56500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:20:44.983105   56500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:20:45.482710   56500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:20:45.983102   56500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:20:46.483391   56500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:20:46.982694   56500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:20:47.483108   56500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:20:47.982588   56500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:20:48.483092   56500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:20:48.983043   56500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:20:49.482856   56500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:20:49.982507   56500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:20:50.059944   56500 kubeadm.go:1081] duration metric: took 15.820416741s to wait for elevateKubeSystemPrivileges.
	I0817 21:20:50.059980   56500 kubeadm.go:406] StartCluster complete in 27.697311595s
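The burst of identical `kubectl get sa default` runs above is the wait inside elevateKubeSystemPrivileges: minikube retries roughly every 500ms until the default service account exists, since the cluster-admin binding created earlier targets that account. A stripped-down sketch of the same polling pattern, assuming `kubectl` is on PATH and configured for the cluster:

	// Poll `kubectl get sa default` until the default service account
	// exists, mirroring the ~500ms retry loop in the log above.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitForDefaultSA(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if err := exec.Command("kubectl", "get", "sa", "default").Run(); err == nil {
				return nil // service account exists
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default service account not ready after %s", timeout)
	}

	func main() {
		if err := waitForDefaultSA(2 * time.Minute); err != nil {
			fmt.Println(err)
		}
	}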
	I0817 21:20:50.060035   56500 settings.go:142] acquiring lock: {Name:mkab7abc846835e928b69a2120c7e34b55f8acdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:20:50.060112   56500 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16865-10716/kubeconfig
	I0817 21:20:50.060788   56500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-10716/kubeconfig: {Name:mk8d25353b4b324f395053b70676ed1b624da94d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:20:50.061033   56500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 21:20:50.061037   56500 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0817 21:20:50.061108   56500 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-997484"
	I0817 21:20:50.061118   56500 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-997484"
	I0817 21:20:50.061130   56500 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-997484"
	I0817 21:20:50.061133   56500 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-997484"
	I0817 21:20:50.061185   56500 host.go:66] Checking if "ingress-addon-legacy-997484" exists ...
	I0817 21:20:50.061249   56500 config.go:182] Loaded profile config "ingress-addon-legacy-997484": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0817 21:20:50.061483   56500 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-997484 --format={{.State.Status}}
	I0817 21:20:50.061690   56500 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-997484 --format={{.State.Status}}
	I0817 21:20:50.061674   56500 kapi.go:59] client config for ingress-addon-legacy-997484: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/client.crt", KeyFile:"/home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/client.key", CAFile:"/home/jenkins/minikube-integration/16865-10716/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d28680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 21:20:50.062690   56500 cert_rotation.go:137] Starting client certificate rotation controller
	I0817 21:20:50.078737   56500 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-997484" context rescaled to 1 replicas
	I0817 21:20:50.078788   56500 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0817 21:20:50.080580   56500 out.go:177] * Verifying Kubernetes components...
	I0817 21:20:50.082433   56500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 21:20:50.083892   56500 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 21:20:50.083476   56500 kapi.go:59] client config for ingress-addon-legacy-997484: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/client.crt", KeyFile:"/home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/client.key", CAFile:"/home/jenkins/minikube-integration/16865-10716/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d28680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 21:20:50.085298   56500 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 21:20:50.085316   56500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 21:20:50.085364   56500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-997484
	I0817 21:20:50.089737   56500 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-997484"
	I0817 21:20:50.089781   56500 host.go:66] Checking if "ingress-addon-legacy-997484" exists ...
	I0817 21:20:50.090288   56500 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-997484 --format={{.State.Status}}
	I0817 21:20:50.105384   56500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/ingress-addon-legacy-997484/id_rsa Username:docker}
	I0817 21:20:50.108167   56500 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 21:20:50.108190   56500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 21:20:50.108246   56500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-997484
	I0817 21:20:50.124294   56500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/ingress-addon-legacy-997484/id_rsa Username:docker}
	I0817 21:20:50.168477   56500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
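The sed pipeline above rewrites the CoreDNS ConfigMap in place: it inserts a `hosts` block mapping host.minikube.internal to the container gateway (192.168.49.1) ahead of the `forward` plugin, and a `log` directive ahead of `errors`. Reconstructed from those sed expressions (not captured from the cluster), the resulting Corefile fragment looks like:

	        log
	        errors
	        # ...other plugins unchanged...
	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf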
	I0817 21:20:50.169037   56500 kapi.go:59] client config for ingress-addon-legacy-997484: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/client.crt", KeyFile:"/home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/client.key", CAFile:"/home/jenkins/minikube-integration/16865-10716/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d28680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 21:20:50.169301   56500 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-997484" to be "Ready" ...
	I0817 21:20:50.243148   56500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 21:20:50.327200   56500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 21:20:50.747322   56500 start.go:901] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0817 21:20:50.880809   56500 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0817 21:20:50.882114   56500 addons.go:502] enable addons completed in 821.085076ms: enabled=[default-storageclass storage-provisioner]
	I0817 21:20:52.177153   56500 node_ready.go:58] node "ingress-addon-legacy-997484" has status "Ready":"False"
	I0817 21:20:54.350170   56500 node_ready.go:58] node "ingress-addon-legacy-997484" has status "Ready":"False"
	I0817 21:20:56.677668   56500 node_ready.go:58] node "ingress-addon-legacy-997484" has status "Ready":"False"
	I0817 21:20:58.677758   56500 node_ready.go:58] node "ingress-addon-legacy-997484" has status "Ready":"False"
	I0817 21:21:01.178139   56500 node_ready.go:58] node "ingress-addon-legacy-997484" has status "Ready":"False"
	I0817 21:21:03.677672   56500 node_ready.go:58] node "ingress-addon-legacy-997484" has status "Ready":"False"
	I0817 21:21:04.177576   56500 node_ready.go:49] node "ingress-addon-legacy-997484" has status "Ready":"True"
	I0817 21:21:04.177604   56500 node_ready.go:38] duration metric: took 14.008267869s waiting for node "ingress-addon-legacy-997484" to be "Ready" ...
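The node_ready polling above boils down to reading the NodeReady condition from the node's status; a minimal client-go sketch of that check, using the kubeconfig path from this run for illustration:

	// Check the NodeReady condition the way node_ready.go polls it above.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func nodeReady(cs kubernetes.Interface, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/16865-10716/kubeconfig")
		if err != nil {
			panic(err)
		}
		ready, err := nodeReady(kubernetes.NewForConfigOrDie(cfg), "ingress-addon-legacy-997484")
		fmt.Println(ready, err)
	}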
	I0817 21:21:04.177615   56500 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 21:21:04.185327   56500 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-5hqff" in "kube-system" namespace to be "Ready" ...
	I0817 21:21:06.192663   56500 pod_ready.go:102] pod "coredns-66bff467f8-5hqff" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-17 21:20:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0817 21:21:08.691860   56500 pod_ready.go:102] pod "coredns-66bff467f8-5hqff" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-17 21:20:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0817 21:21:10.694778   56500 pod_ready.go:102] pod "coredns-66bff467f8-5hqff" in "kube-system" namespace has status "Ready":"False"
	I0817 21:21:13.194264   56500 pod_ready.go:102] pod "coredns-66bff467f8-5hqff" in "kube-system" namespace has status "Ready":"False"
	I0817 21:21:15.195986   56500 pod_ready.go:102] pod "coredns-66bff467f8-5hqff" in "kube-system" namespace has status "Ready":"False"
	I0817 21:21:17.694845   56500 pod_ready.go:102] pod "coredns-66bff467f8-5hqff" in "kube-system" namespace has status "Ready":"False"
	I0817 21:21:18.694959   56500 pod_ready.go:92] pod "coredns-66bff467f8-5hqff" in "kube-system" namespace has status "Ready":"True"
	I0817 21:21:18.694984   56500 pod_ready.go:81] duration metric: took 14.509634675s waiting for pod "coredns-66bff467f8-5hqff" in "kube-system" namespace to be "Ready" ...
	I0817 21:21:18.694997   56500 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-997484" in "kube-system" namespace to be "Ready" ...
	I0817 21:21:18.698841   56500 pod_ready.go:92] pod "etcd-ingress-addon-legacy-997484" in "kube-system" namespace has status "Ready":"True"
	I0817 21:21:18.698856   56500 pod_ready.go:81] duration metric: took 3.85274ms waiting for pod "etcd-ingress-addon-legacy-997484" in "kube-system" namespace to be "Ready" ...
	I0817 21:21:18.698866   56500 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-997484" in "kube-system" namespace to be "Ready" ...
	I0817 21:21:18.702462   56500 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-997484" in "kube-system" namespace has status "Ready":"True"
	I0817 21:21:18.702477   56500 pod_ready.go:81] duration metric: took 3.6048ms waiting for pod "kube-apiserver-ingress-addon-legacy-997484" in "kube-system" namespace to be "Ready" ...
	I0817 21:21:18.702486   56500 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-997484" in "kube-system" namespace to be "Ready" ...
	I0817 21:21:18.706202   56500 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-997484" in "kube-system" namespace has status "Ready":"True"
	I0817 21:21:18.706221   56500 pod_ready.go:81] duration metric: took 3.727739ms waiting for pod "kube-controller-manager-ingress-addon-legacy-997484" in "kube-system" namespace to be "Ready" ...
	I0817 21:21:18.706230   56500 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vjj9q" in "kube-system" namespace to be "Ready" ...
	I0817 21:21:18.709709   56500 pod_ready.go:92] pod "kube-proxy-vjj9q" in "kube-system" namespace has status "Ready":"True"
	I0817 21:21:18.709727   56500 pod_ready.go:81] duration metric: took 3.491326ms waiting for pod "kube-proxy-vjj9q" in "kube-system" namespace to be "Ready" ...
	I0817 21:21:18.709737   56500 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-997484" in "kube-system" namespace to be "Ready" ...
	I0817 21:21:18.891113   56500 request.go:628] Waited for 181.308111ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-997484
	I0817 21:21:19.091048   56500 request.go:628] Waited for 197.375195ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-997484
	I0817 21:21:19.093631   56500 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-997484" in "kube-system" namespace has status "Ready":"True"
	I0817 21:21:19.093651   56500 pod_ready.go:81] duration metric: took 383.907688ms waiting for pod "kube-scheduler-ingress-addon-legacy-997484" in "kube-system" namespace to be "Ready" ...
	I0817 21:21:19.093665   56500 pod_ready.go:38] duration metric: took 14.916038688s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 21:21:19.093682   56500 api_server.go:52] waiting for apiserver process to appear ...
	I0817 21:21:19.093725   56500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 21:21:19.104077   56500 api_server.go:72] duration metric: took 29.025252604s to wait for apiserver process to appear ...
	I0817 21:21:19.104104   56500 api_server.go:88] waiting for apiserver healthz status ...
	I0817 21:21:19.104121   56500 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0817 21:21:19.108795   56500 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0817 21:21:19.109558   56500 api_server.go:141] control plane version: v1.18.20
	I0817 21:21:19.109582   56500 api_server.go:131] duration metric: took 5.470823ms to wait for apiserver health ...
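The healthz probe logged above is a plain HTTPS GET against the apiserver that expects a 200 with body "ok" (as returned at 21:21:19.108795). A minimal sketch, skipping certificate verification for brevity where the real client trusts minikube's generated CA:

	// Probe the apiserver's /healthz endpoint; a healthy server answers
	// HTTP 200 with the body "ok".
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
		}}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body))
	}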
	I0817 21:21:19.109591   56500 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 21:21:19.290994   56500 request.go:628] Waited for 181.312368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0817 21:21:19.295879   56500 system_pods.go:59] 8 kube-system pods found
	I0817 21:21:19.295903   56500 system_pods.go:61] "coredns-66bff467f8-5hqff" [ba04f122-7dd0-4bc2-a412-f8678a855dfb] Running
	I0817 21:21:19.295908   56500 system_pods.go:61] "etcd-ingress-addon-legacy-997484" [61311eea-1152-40bc-bd18-789de5523d40] Running
	I0817 21:21:19.295912   56500 system_pods.go:61] "kindnet-65b86" [dfb9f8d1-5c72-47f3-9b15-6c25b6514947] Running
	I0817 21:21:19.295916   56500 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-997484" [60ee76ed-fa17-496e-82f6-6beba879945c] Running
	I0817 21:21:19.295920   56500 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-997484" [6d2b3774-0778-40a3-aee7-206931ac1958] Running
	I0817 21:21:19.295925   56500 system_pods.go:61] "kube-proxy-vjj9q" [daf5112a-abf2-4a48-9cc1-7a11b2db045d] Running
	I0817 21:21:19.295929   56500 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-997484" [67a72183-b588-4e77-9c6c-76dab370f95f] Running
	I0817 21:21:19.295933   56500 system_pods.go:61] "storage-provisioner" [0f3bf25b-1b2d-45b0-85a5-8c50055d52eb] Running
	I0817 21:21:19.295938   56500 system_pods.go:74] duration metric: took 186.342639ms to wait for pod list to return data ...
	I0817 21:21:19.295947   56500 default_sa.go:34] waiting for default service account to be created ...
	I0817 21:21:19.490715   56500 request.go:628] Waited for 194.705473ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0817 21:21:19.492980   56500 default_sa.go:45] found service account: "default"
	I0817 21:21:19.492999   56500 default_sa.go:55] duration metric: took 197.047268ms for default service account to be created ...
	I0817 21:21:19.493005   56500 system_pods.go:116] waiting for k8s-apps to be running ...
	I0817 21:21:19.690342   56500 request.go:628] Waited for 197.264157ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0817 21:21:19.695477   56500 system_pods.go:86] 8 kube-system pods found
	I0817 21:21:19.695500   56500 system_pods.go:89] "coredns-66bff467f8-5hqff" [ba04f122-7dd0-4bc2-a412-f8678a855dfb] Running
	I0817 21:21:19.695506   56500 system_pods.go:89] "etcd-ingress-addon-legacy-997484" [61311eea-1152-40bc-bd18-789de5523d40] Running
	I0817 21:21:19.695510   56500 system_pods.go:89] "kindnet-65b86" [dfb9f8d1-5c72-47f3-9b15-6c25b6514947] Running
	I0817 21:21:19.695517   56500 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-997484" [60ee76ed-fa17-496e-82f6-6beba879945c] Running
	I0817 21:21:19.695522   56500 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-997484" [6d2b3774-0778-40a3-aee7-206931ac1958] Running
	I0817 21:21:19.695525   56500 system_pods.go:89] "kube-proxy-vjj9q" [daf5112a-abf2-4a48-9cc1-7a11b2db045d] Running
	I0817 21:21:19.695529   56500 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-997484" [67a72183-b588-4e77-9c6c-76dab370f95f] Running
	I0817 21:21:19.695533   56500 system_pods.go:89] "storage-provisioner" [0f3bf25b-1b2d-45b0-85a5-8c50055d52eb] Running
	I0817 21:21:19.695539   56500 system_pods.go:126] duration metric: took 202.530194ms to wait for k8s-apps to be running ...
	I0817 21:21:19.695546   56500 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 21:21:19.695583   56500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 21:21:19.707854   56500 system_svc.go:56] duration metric: took 12.298354ms WaitForService to wait for kubelet.
	I0817 21:21:19.707881   56500 kubeadm.go:581] duration metric: took 29.629061946s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 21:21:19.707912   56500 node_conditions.go:102] verifying NodePressure condition ...
	I0817 21:21:19.890290   56500 request.go:628] Waited for 182.285304ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0817 21:21:19.893077   56500 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0817 21:21:19.893099   56500 node_conditions.go:123] node cpu capacity is 8
	I0817 21:21:19.893108   56500 node_conditions.go:105] duration metric: took 185.191613ms to run NodePressure ...
	I0817 21:21:19.893117   56500 start.go:228] waiting for startup goroutines ...
	I0817 21:21:19.893123   56500 start.go:233] waiting for cluster config update ...
	I0817 21:21:19.893132   56500 start.go:242] writing updated cluster config ...
	I0817 21:21:19.893405   56500 ssh_runner.go:195] Run: rm -f paused
	I0817 21:21:19.937732   56500 start.go:600] kubectl: 1.28.0, cluster: 1.18.20 (minor skew: 10)
	I0817 21:21:19.939852   56500 out.go:177] 
	W0817 21:21:19.941516   56500 out.go:239] ! /usr/local/bin/kubectl is version 1.28.0, which may have incompatibilities with Kubernetes 1.18.20.
	I0817 21:21:19.942970   56500 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0817 21:21:19.945856   56500 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-997484" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Aug 17 21:24:29 ingress-addon-legacy-997484 crio[952]: time="2023-08-17 21:24:29.319006195Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-6bpdm from CNI network \"kindnet\" (type=ptp)"
	Aug 17 21:24:29 ingress-addon-legacy-997484 crio[952]: time="2023-08-17 21:24:29.359173125Z" level=info msg="Stopped pod sandbox: 0796bd2f7e69e4a860330c18b291dcbbc3380698b698f04e2aaaeb2d6bf4dab7" id=1780117d-356d-433d-8be0-ea6631d18646 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 17 21:24:29 ingress-addon-legacy-997484 crio[952]: time="2023-08-17 21:24:29.359298155Z" level=info msg="Stopped pod sandbox (already stopped): 0796bd2f7e69e4a860330c18b291dcbbc3380698b698f04e2aaaeb2d6bf4dab7" id=114db402-b137-483b-9e8f-d140520189f2 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 17 21:24:33 ingress-addon-legacy-997484 crio[952]: time="2023-08-17 21:24:33.952704954Z" level=info msg="Removing container: 41ee30fab766646c707bcff84718940dd50e24300fdeb3d3bad04294368dfa25" id=2ec31e64-d0aa-4966-81b3-ffd16e5a68ef name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 17 21:24:33 ingress-addon-legacy-997484 crio[952]: time="2023-08-17 21:24:33.967101151Z" level=info msg="Removed container 41ee30fab766646c707bcff84718940dd50e24300fdeb3d3bad04294368dfa25: ingress-nginx/ingress-nginx-controller-7fcf777cb7-6bpdm/controller" id=2ec31e64-d0aa-4966-81b3-ffd16e5a68ef name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 17 21:24:33 ingress-addon-legacy-997484 crio[952]: time="2023-08-17 21:24:33.968254156Z" level=info msg="Removing container: 3090cf900f7bda3862bc233bd57a26f973144860724c43ec677f4d4c68c0183a" id=da425ff2-f3ef-478c-bd13-e817f3e193f7 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 17 21:24:33 ingress-addon-legacy-997484 crio[952]: time="2023-08-17 21:24:33.982361306Z" level=info msg="Removed container 3090cf900f7bda3862bc233bd57a26f973144860724c43ec677f4d4c68c0183a: ingress-nginx/ingress-nginx-admission-patch-ff4s5/patch" id=da425ff2-f3ef-478c-bd13-e817f3e193f7 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 17 21:24:33 ingress-addon-legacy-997484 crio[952]: time="2023-08-17 21:24:33.983259825Z" level=info msg="Removing container: 864d6fb59804ddb6a6b300d154332ac7678bfc632645241de37c74cb5461f70f" id=748cd1c3-8c88-479d-b2b8-4be575472a7e name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 17 21:24:34 ingress-addon-legacy-997484 crio[952]: time="2023-08-17 21:24:34.027106100Z" level=info msg="Removed container 864d6fb59804ddb6a6b300d154332ac7678bfc632645241de37c74cb5461f70f: ingress-nginx/ingress-nginx-admission-create-zj26f/create" id=748cd1c3-8c88-479d-b2b8-4be575472a7e name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 17 21:24:34 ingress-addon-legacy-997484 crio[952]: time="2023-08-17 21:24:34.028515622Z" level=info msg="Stopping pod sandbox: 0796bd2f7e69e4a860330c18b291dcbbc3380698b698f04e2aaaeb2d6bf4dab7" id=57c2088f-9f65-45f0-b8d4-b89ce7b8b519 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 17 21:24:34 ingress-addon-legacy-997484 crio[952]: time="2023-08-17 21:24:34.028554405Z" level=info msg="Stopped pod sandbox (already stopped): 0796bd2f7e69e4a860330c18b291dcbbc3380698b698f04e2aaaeb2d6bf4dab7" id=57c2088f-9f65-45f0-b8d4-b89ce7b8b519 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 17 21:24:34 ingress-addon-legacy-997484 crio[952]: time="2023-08-17 21:24:34.028899559Z" level=info msg="Removing pod sandbox: 0796bd2f7e69e4a860330c18b291dcbbc3380698b698f04e2aaaeb2d6bf4dab7" id=9dc2fc85-c2c7-4ffd-bd25-cff40abe2f26 name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Aug 17 21:24:34 ingress-addon-legacy-997484 crio[952]: time="2023-08-17 21:24:34.036473473Z" level=info msg="Removed pod sandbox: 0796bd2f7e69e4a860330c18b291dcbbc3380698b698f04e2aaaeb2d6bf4dab7" id=9dc2fc85-c2c7-4ffd-bd25-cff40abe2f26 name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Aug 17 21:24:34 ingress-addon-legacy-997484 crio[952]: time="2023-08-17 21:24:34.036925167Z" level=info msg="Stopping pod sandbox: ebff3968cd5cb2f9229ee79b80fbbb01c2333262021aec0ee902cbe660720bdb" id=d51b7eb6-1e95-4ed3-a442-6634830a2bee name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 17 21:24:34 ingress-addon-legacy-997484 crio[952]: time="2023-08-17 21:24:34.036965582Z" level=info msg="Stopped pod sandbox (already stopped): ebff3968cd5cb2f9229ee79b80fbbb01c2333262021aec0ee902cbe660720bdb" id=d51b7eb6-1e95-4ed3-a442-6634830a2bee name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 17 21:24:34 ingress-addon-legacy-997484 crio[952]: time="2023-08-17 21:24:34.037242824Z" level=info msg="Removing pod sandbox: ebff3968cd5cb2f9229ee79b80fbbb01c2333262021aec0ee902cbe660720bdb" id=27b0fbf8-e202-4a57-9a66-31b070d8649f name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Aug 17 21:24:34 ingress-addon-legacy-997484 crio[952]: time="2023-08-17 21:24:34.043233553Z" level=info msg="Removed pod sandbox: ebff3968cd5cb2f9229ee79b80fbbb01c2333262021aec0ee902cbe660720bdb" id=27b0fbf8-e202-4a57-9a66-31b070d8649f name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Aug 17 21:24:34 ingress-addon-legacy-997484 crio[952]: time="2023-08-17 21:24:34.043626281Z" level=info msg="Stopping pod sandbox: 1d07cb727b9d33cc670878acf4684e6d786591ee4b2d79d64890ce2bfb9792de" id=78fd4e18-4195-42d3-9796-73b0c006e542 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 17 21:24:34 ingress-addon-legacy-997484 crio[952]: time="2023-08-17 21:24:34.043669579Z" level=info msg="Stopped pod sandbox (already stopped): 1d07cb727b9d33cc670878acf4684e6d786591ee4b2d79d64890ce2bfb9792de" id=78fd4e18-4195-42d3-9796-73b0c006e542 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 17 21:24:34 ingress-addon-legacy-997484 crio[952]: time="2023-08-17 21:24:34.043953649Z" level=info msg="Removing pod sandbox: 1d07cb727b9d33cc670878acf4684e6d786591ee4b2d79d64890ce2bfb9792de" id=5ff3518f-be70-46ee-9ed4-21237dd2df93 name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Aug 17 21:24:34 ingress-addon-legacy-997484 crio[952]: time="2023-08-17 21:24:34.048827407Z" level=info msg="Removed pod sandbox: 1d07cb727b9d33cc670878acf4684e6d786591ee4b2d79d64890ce2bfb9792de" id=5ff3518f-be70-46ee-9ed4-21237dd2df93 name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Aug 17 21:24:34 ingress-addon-legacy-997484 crio[952]: time="2023-08-17 21:24:34.049183078Z" level=info msg="Stopping pod sandbox: 6d1ea2e0b2ccac3eec235710b3f7186762a8db2b80f8528bdb76740162083b54" id=c96caaa5-c479-4ee2-92c5-9ea9c42f4999 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 17 21:24:34 ingress-addon-legacy-997484 crio[952]: time="2023-08-17 21:24:34.049217543Z" level=info msg="Stopped pod sandbox (already stopped): 6d1ea2e0b2ccac3eec235710b3f7186762a8db2b80f8528bdb76740162083b54" id=c96caaa5-c479-4ee2-92c5-9ea9c42f4999 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 17 21:24:34 ingress-addon-legacy-997484 crio[952]: time="2023-08-17 21:24:34.049492047Z" level=info msg="Removing pod sandbox: 6d1ea2e0b2ccac3eec235710b3f7186762a8db2b80f8528bdb76740162083b54" id=eaf13a1f-b227-4884-8f42-4c7ea1348025 name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Aug 17 21:24:34 ingress-addon-legacy-997484 crio[952]: time="2023-08-17 21:24:34.055585244Z" level=info msg="Removed pod sandbox: 6d1ea2e0b2ccac3eec235710b3f7186762a8db2b80f8528bdb76740162083b54" id=eaf13a1f-b227-4884-8f42-4c7ea1348025 name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                     CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	62dd43ed202bb       gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea   23 seconds ago      Running             hello-world-app           0                   64d635e81d3d7       hello-world-app-5f5d8b66bb-645dn
	e6becaf2a8a25       docker.io/library/nginx@sha256:9d749f4c66e206398d0c2d667b2b14c201cc4fd089419245c14a031b9368de3a           2 minutes ago       Running             nginx                     0                   298c33ad3e084       nginx
	5d8e58af3ed9e       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                          3 minutes ago       Running             coredns                   0                   d4525d547733e       coredns-66bff467f8-5hqff
	cf7703db640af       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                          3 minutes ago       Running             storage-provisioner       0                   5c54fe9cdb293       storage-provisioner
	845a79ae75737       docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974        3 minutes ago       Running             kindnet-cni               0                   8220275178e6c       kindnet-65b86
	93cc11abb808e       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                          3 minutes ago       Running             kube-proxy                0                   0052915cf1e16       kube-proxy-vjj9q
	c2f101666d769       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                          4 minutes ago       Running             etcd                      0                   8de3f447b1ae8       etcd-ingress-addon-legacy-997484
	ed2f27bbcbd05       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                          4 minutes ago       Running             kube-controller-manager   0                   3934009fd3032       kube-controller-manager-ingress-addon-legacy-997484
	1284107d39f95       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                          4 minutes ago       Running             kube-scheduler            0                   8aecbb190cccc       kube-scheduler-ingress-addon-legacy-997484
	5f589f9206526       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                          4 minutes ago       Running             kube-apiserver            0                   a618d9268e147       kube-apiserver-ingress-addon-legacy-997484
	
	* 
	* ==> coredns [5d8e58af3ed9e34fb5e4a834684124f949bc0ecf02df90731eaddb2e5b200c61] <==
	* [INFO] 10.244.0.5:60515 - 41466 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.005184724s
	[INFO] 10.244.0.5:43448 - 51761 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003980479s
	[INFO] 10.244.0.5:49389 - 47930 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004068627s
	[INFO] 10.244.0.5:58857 - 19184 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003956971s
	[INFO] 10.244.0.5:50524 - 46764 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004422539s
	[INFO] 10.244.0.5:33084 - 54544 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004259513s
	[INFO] 10.244.0.5:40166 - 4779 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004403968s
	[INFO] 10.244.0.5:47121 - 28238 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004439937s
	[INFO] 10.244.0.5:60515 - 16196 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004211084s
	[INFO] 10.244.0.5:58857 - 39619 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007532441s
	[INFO] 10.244.0.5:49389 - 53852 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007672092s
	[INFO] 10.244.0.5:40166 - 8462 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007588819s
	[INFO] 10.244.0.5:60515 - 6291 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007355968s
	[INFO] 10.244.0.5:47121 - 32843 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007612042s
	[INFO] 10.244.0.5:58857 - 61010 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000056434s
	[INFO] 10.244.0.5:50524 - 45180 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007700315s
	[INFO] 10.244.0.5:60515 - 10749 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000064342s
	[INFO] 10.244.0.5:49389 - 40170 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000167683s
	[INFO] 10.244.0.5:43448 - 20016 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.008035766s
	[INFO] 10.244.0.5:50524 - 30769 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000094921s
	[INFO] 10.244.0.5:33084 - 25865 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.008017382s
	[INFO] 10.244.0.5:47121 - 62771 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000281888s
	[INFO] 10.244.0.5:40166 - 57959 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000425135s
	[INFO] 10.244.0.5:43448 - 18215 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000184297s
	[INFO] 10.244.0.5:33084 - 10675 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000075509s
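
Note: the NXDOMAIN/NOERROR pairs above are normal Kubernetes DNS search-path expansion, not errors. With the default "options ndots:5", the lookup for hello-world-app.default.svc.cluster.local is first tried against each suffix in the pod's resolv.conf search list (here the host-inherited c.k8s-minikube.internal and google.internal domains, which return NXDOMAIN) before the unexpanded name resolves with NOERROR. A minimal check of the search path, assuming the nginx pod from the container list above is still running:

	kubectl --context ingress-addon-legacy-997484 exec nginx -- cat /etc/resolv.conf
	# expected shape (service IP and host search domains may differ):
	#   nameserver 10.96.0.10
	#   search default.svc.cluster.local svc.cluster.local cluster.local c.k8s-minikube.internal google.internal
	#   options ndots:5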
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-997484
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-997484
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=887b29127f76723e975982e9ba9e8c24f3dd2612
	                    minikube.k8s.io/name=ingress-addon-legacy-997484
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_17T21_20_34_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 17 Aug 2023 21:20:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-997484
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 17 Aug 2023 21:24:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 17 Aug 2023 21:24:34 +0000   Thu, 17 Aug 2023 21:20:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 17 Aug 2023 21:24:34 +0000   Thu, 17 Aug 2023 21:20:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 17 Aug 2023 21:24:34 +0000   Thu, 17 Aug 2023 21:20:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 17 Aug 2023 21:24:34 +0000   Thu, 17 Aug 2023 21:21:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-997484
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	System Info:
	  Machine ID:                 f4180ab8126f4c9bb02a55016df24706
	  System UUID:                e3bf9423-ed6f-4b1e-9c0b-95b7ec5b5081
	  Boot ID:                    8d1de0dd-e970-4922-97d1-4b473b3fd1c5
	  Kernel Version:             5.15.0-1039-gcp
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-645dn                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  kube-system                 coredns-66bff467f8-5hqff                               100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m45s
	  kube-system                 etcd-ingress-addon-legacy-997484                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 kindnet-65b86                                          100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m44s
	  kube-system                 kube-apiserver-ingress-addon-legacy-997484             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 kube-controller-manager-ingress-addon-legacy-997484    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 kube-proxy-vjj9q                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 kube-scheduler-ingress-addon-legacy-997484             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             120Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  NodeHasSufficientMemory  4m8s (x5 over 4m9s)  kubelet     Node ingress-addon-legacy-997484 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m8s (x5 over 4m9s)  kubelet     Node ingress-addon-legacy-997484 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m8s (x4 over 4m9s)  kubelet     Node ingress-addon-legacy-997484 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m1s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m                   kubelet     Node ingress-addon-legacy-997484 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m                   kubelet     Node ingress-addon-legacy-997484 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m                   kubelet     Node ingress-addon-legacy-997484 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m43s                kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m30s                kubelet     Node ingress-addon-legacy-997484 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.004939] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006659] FS-Cache: N-cookie d=00000000fedf765e{9p.inode} n=00000000579d86d3
	[  +0.008741] FS-Cache: N-key=[8] '80a00f0200000000'
	[  +0.355249] FS-Cache: Duplicate cookie detected
	[  +0.004687] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006749] FS-Cache: O-cookie d=00000000fedf765e{9p.inode} n=00000000a79b8bcd
	[  +0.007355] FS-Cache: O-key=[8] '8da00f0200000000'
	[  +0.004965] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.007940] FS-Cache: N-cookie d=00000000fedf765e{9p.inode} n=0000000043d232ba
	[  +0.008746] FS-Cache: N-key=[8] '8da00f0200000000'
	[ +21.449794] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Aug17 21:21] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 16 a0 ec 28 47 3d 8a 1e 5a be 09 3d 08 00
	[  +1.024489] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 16 a0 ec 28 47 3d 8a 1e 5a be 09 3d 08 00
	[Aug17 21:22] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 16 a0 ec 28 47 3d 8a 1e 5a be 09 3d 08 00
	[  +4.159602] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 16 a0 ec 28 47 3d 8a 1e 5a be 09 3d 08 00
	[  +8.191196] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 16 a0 ec 28 47 3d 8a 1e 5a be 09 3d 08 00
	[ +16.126446] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 16 a0 ec 28 47 3d 8a 1e 5a be 09 3d 08 00
	[Aug17 21:23] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 16 a0 ec 28 47 3d 8a 1e 5a be 09 3d 08 00
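
Note: the repeated "martian source" entries record packets arriving on eth0 with the implausible source address 127.0.0.1; the kernel only prints these when martian logging is enabled alongside reverse-path filtering. A quick look at the relevant sysctls on the node, reusing the minikube binary invoked elsewhere in this report:

	out/minikube-linux-amd64 -p ingress-addon-legacy-997484 ssh -- sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.all.log_martians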
	
	* 
	* ==> etcd [c2f101666d7698884d20087fd262d964c2a67f76e753d531e17177630f18714b] <==
	* raft2023/08/17 21:20:27 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/08/17 21:20:27 INFO: aec36adc501070cc became follower at term 1
	raft2023/08/17 21:20:27 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-08-17 21:20:27.032151 W | auth: simple token is not cryptographically signed
	2023-08-17 21:20:27.036305 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-08-17 21:20:27.038011 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-08-17 21:20:27.038203 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-08-17 21:20:27.038264 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-08-17 21:20:27.038419 I | embed: listening for peers on 192.168.49.2:2380
	raft2023/08/17 21:20:27 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-08-17 21:20:27.038571 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	raft2023/08/17 21:20:28 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/08/17 21:20:28 INFO: aec36adc501070cc became candidate at term 2
	raft2023/08/17 21:20:28 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/08/17 21:20:28 INFO: aec36adc501070cc became leader at term 2
	raft2023/08/17 21:20:28 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-08-17 21:20:28.028244 I | embed: ready to serve client requests
	2023-08-17 21:20:28.028432 I | etcdserver: published {Name:ingress-addon-legacy-997484 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-08-17 21:20:28.028511 I | embed: ready to serve client requests
	2023-08-17 21:20:28.028574 I | etcdserver: setting up the initial cluster version to 3.4
	2023-08-17 21:20:28.029357 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-08-17 21:20:28.029521 I | etcdserver/api: enabled capabilities for version 3.4
	2023-08-17 21:20:28.031046 I | embed: serving client requests on 192.168.49.2:2379
	2023-08-17 21:20:28.031191 I | embed: serving client requests on 127.0.0.1:2379
	2023-08-17 21:20:54.348415 W | etcdserver: read-only range request "key:\"/registry/minions/ingress-addon-legacy-997484\" " with result "range_response_count:1 size:6604" took too long (172.055267ms) to execute
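
Note: the single "took too long (172ms)" warning above is etcd flagging a read that exceeded its expected latency; one such warning during bootstrap usually points to transient disk contention rather than a fault. If it recurred, a latency probe could be run inside the etcd pod; this sketch reuses the TLS paths etcd logged at startup and assumes the image ships etcdctl, as upstream etcd images do:

	kubectl --context ingress-addon-legacy-997484 -n kube-system exec etcd-ingress-addon-legacy-997484 -- \
	  sh -c "ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
	         --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	         --cert=/var/lib/minikube/certs/etcd/server.crt \
	         --key=/var/lib/minikube/certs/etcd/server.key check perf"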
	
	* 
	* ==> kernel <==
	*  21:24:34 up  1:07,  0 users,  load average: 0.08, 0.62, 0.48
	Linux ingress-addon-legacy-997484 5.15.0-1039-gcp #47~20.04.1-Ubuntu SMP Thu Jul 27 22:40:03 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [845a79ae75737eadc74d9873b4a66c51bf537ca386362320cca469c0abe450ed] <==
	* I0817 21:22:25.073459       1 main.go:227] handling current node
	I0817 21:22:35.077127       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:22:35.077158       1 main.go:227] handling current node
	I0817 21:22:45.088838       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:22:45.088863       1 main.go:227] handling current node
	I0817 21:22:55.092697       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:22:55.092723       1 main.go:227] handling current node
	I0817 21:23:05.104660       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:23:05.104685       1 main.go:227] handling current node
	I0817 21:23:15.107944       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:23:15.107967       1 main.go:227] handling current node
	I0817 21:23:25.119972       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:23:25.119993       1 main.go:227] handling current node
	I0817 21:23:35.123029       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:23:35.123053       1 main.go:227] handling current node
	I0817 21:23:45.132854       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:23:45.132881       1 main.go:227] handling current node
	I0817 21:23:55.137155       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:23:55.137181       1 main.go:227] handling current node
	I0817 21:24:05.148830       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:24:05.148855       1 main.go:227] handling current node
	I0817 21:24:15.152026       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:24:15.152048       1 main.go:227] handling current node
	I0817 21:24:25.163973       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0817 21:24:25.163999       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [5f589f920652605f14f84fc815f4c211754732856a48cdf1051e03f48300e64a] <==
	* I0817 21:20:30.676423       1 controller.go:81] Starting OpenAPI AggregationController
	E0817 21:20:30.678515       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0817 21:20:30.776580       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0817 21:20:30.781455       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0817 21:20:30.822097       1 cache.go:39] Caches are synced for autoregister controller
	I0817 21:20:30.822569       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0817 21:20:30.822646       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0817 21:20:31.675680       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0817 21:20:31.675710       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0817 21:20:31.680140       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0817 21:20:31.682665       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0817 21:20:31.682683       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0817 21:20:31.959555       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0817 21:20:31.985872       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0817 21:20:32.038895       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0817 21:20:32.039662       1 controller.go:609] quota admission added evaluator for: endpoints
	I0817 21:20:32.042439       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0817 21:20:32.431555       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0817 21:20:32.990438       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0817 21:20:33.607634       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0817 21:20:33.774662       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0817 21:20:49.859850       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0817 21:20:50.225805       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0817 21:21:20.570213       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0817 21:21:49.003370       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [ed2f27bbcbd054b0bb83178c698d48b55ddc13dd77dd16ab9e62450100965d3f] <==
	* I0817 21:20:50.244691       1 shared_informer.go:223] Waiting for caches to sync for cidrallocator
	I0817 21:20:50.244696       1 shared_informer.go:230] Caches are synced for cidrallocator 
	I0817 21:20:50.328662       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"ed6c30c6-a9ac-4557-87fd-8c18ef349643", APIVersion:"apps/v1", ResourceVersion:"236", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-65b86
	I0817 21:20:50.333210       1 range_allocator.go:373] Set node ingress-addon-legacy-997484 PodCIDR to [10.244.0.0/24]
	I0817 21:20:50.340582       1 shared_informer.go:230] Caches are synced for ReplicationController 
	E0817 21:20:50.425836       1 daemon_controller.go:321] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"d6ee7109-9326-4cda-be4f-7009bf8a788a", ResourceVersion:"218", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63827904033, loc:(*time.Location)(0x6d002e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0018abf00), FieldsType:"FieldsV1", FieldsV1:(*v1.Fields
V1)(0xc0018abf20)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0018abf40), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(n
il), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0015caf00), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSou
rce)(0xc0018abf60), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.Pr
ojectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0018abf80), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolum
eSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.20", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0018abfc0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList
(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0019a27d0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0018192e8), Acti
veDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00041ce00), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPoli
cy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000268758)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001819338)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0817 21:20:50.439334       1 shared_informer.go:230] Caches are synced for disruption 
	I0817 21:20:50.439361       1 disruption.go:339] Sending events to api server.
	I0817 21:20:50.444311       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0817 21:20:50.444397       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0817 21:20:50.522098       1 shared_informer.go:230] Caches are synced for resource quota 
	I0817 21:20:50.522234       1 shared_informer.go:230] Caches are synced for stateful set 
	I0817 21:20:50.522985       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I0817 21:20:50.542839       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0817 21:20:50.542915       1 shared_informer.go:230] Caches are synced for resource quota 
	I0817 21:21:05.240900       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0817 21:21:20.561796       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"e114f167-9330-42d6-a18a-149e6d79a12b", APIVersion:"apps/v1", ResourceVersion:"478", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0817 21:21:20.567820       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"03ecd005-838e-4600-9f5b-9a9f1e773336", APIVersion:"apps/v1", ResourceVersion:"479", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-6bpdm
	I0817 21:21:20.632629       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"c0f7179b-1d72-4927-b604-524ba7d002a4", APIVersion:"batch/v1", ResourceVersion:"486", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-zj26f
	I0817 21:21:20.647471       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"4aaa1a03-8561-44e0-b389-9469d3c1af54", APIVersion:"batch/v1", ResourceVersion:"497", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-ff4s5
	I0817 21:21:24.094298       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"c0f7179b-1d72-4927-b604-524ba7d002a4", APIVersion:"batch/v1", ResourceVersion:"496", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0817 21:21:25.097213       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"4aaa1a03-8561-44e0-b389-9469d3c1af54", APIVersion:"batch/v1", ResourceVersion:"503", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0817 21:24:09.426808       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"129f7000-f2f3-4a00-84bf-472382114d79", APIVersion:"apps/v1", ResourceVersion:"724", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0817 21:24:09.431821       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"77a3cf16-cbed-4ea2-9685-b125af91acd7", APIVersion:"apps/v1", ResourceVersion:"725", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-645dn
	
	* 
	* ==> kube-proxy [93cc11abb808e51ef0afec3bd2013f15e41e067a2cc1cfcdd16843d5a2caf1ec] <==
	* W0817 21:20:51.006937       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0817 21:20:51.013120       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0817 21:20:51.013144       1 server_others.go:186] Using iptables Proxier.
	I0817 21:20:51.013386       1 server.go:583] Version: v1.18.20
	I0817 21:20:51.013810       1 config.go:315] Starting service config controller
	I0817 21:20:51.013827       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0817 21:20:51.013845       1 config.go:133] Starting endpoints config controller
	I0817 21:20:51.013855       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0817 21:20:51.114008       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0817 21:20:51.114021       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [1284107d39f95af297efa6df207d1e5cc9cdd0e15436610624eb66ccc790461d] <==
	* W0817 21:20:30.723059       1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0817 21:20:30.723068       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0817 21:20:30.723074       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0817 21:20:30.742461       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0817 21:20:30.742484       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0817 21:20:30.744490       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0817 21:20:30.744717       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0817 21:20:30.744890       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0817 21:20:30.744934       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0817 21:20:30.747700       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0817 21:20:30.747716       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0817 21:20:30.747857       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0817 21:20:30.747948       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 21:20:30.747951       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0817 21:20:30.748038       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0817 21:20:30.748105       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0817 21:20:30.748159       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0817 21:20:30.748211       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0817 21:20:30.748268       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0817 21:20:30.748326       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0817 21:20:30.822214       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 21:20:31.670688       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0817 21:20:31.800801       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0817 21:20:31.827052       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0817 21:20:34.145201       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* Aug 17 21:24:25 ingress-addon-legacy-997484 kubelet[1867]: E0817 21:24:25.070101    1867 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/7294ff19-3715-4b2d-92db-ac906a8d62d1-minikube-ingress-dns-token-rbjp6 podName:7294ff19-3715-4b2d-92db-ac906a8d62d1 nodeName:}" failed. No retries permitted until 2023-08-17 21:24:25.570078889 +0000 UTC m=+231.994649268 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"minikube-ingress-dns-token-rbjp6\" (UniqueName: \"kubernetes.io/secret/7294ff19-3715-4b2d-92db-ac906a8d62d1-minikube-ingress-dns-token-rbjp6\") pod \"kube-ingress-dns-minikube\" (UID: \"7294ff19-3715-4b2d-92db-ac906a8d62d1\") : secret \"minikube-ingress-dns-token-rbjp6\" not found"
	Aug 17 21:24:25 ingress-addon-legacy-997484 kubelet[1867]: I0817 21:24:25.170188    1867 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-rbjp6" (UniqueName: "kubernetes.io/secret/7294ff19-3715-4b2d-92db-ac906a8d62d1-minikube-ingress-dns-token-rbjp6") pod "7294ff19-3715-4b2d-92db-ac906a8d62d1" (UID: "7294ff19-3715-4b2d-92db-ac906a8d62d1")
	Aug 17 21:24:25 ingress-addon-legacy-997484 kubelet[1867]: I0817 21:24:25.171976    1867 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7294ff19-3715-4b2d-92db-ac906a8d62d1-minikube-ingress-dns-token-rbjp6" (OuterVolumeSpecName: "minikube-ingress-dns-token-rbjp6") pod "7294ff19-3715-4b2d-92db-ac906a8d62d1" (UID: "7294ff19-3715-4b2d-92db-ac906a8d62d1"). InnerVolumeSpecName "minikube-ingress-dns-token-rbjp6". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 17 21:24:25 ingress-addon-legacy-997484 kubelet[1867]: I0817 21:24:25.270486    1867 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-rbjp6" (UniqueName: "kubernetes.io/secret/7294ff19-3715-4b2d-92db-ac906a8d62d1-minikube-ingress-dns-token-rbjp6") on node "ingress-addon-legacy-997484" DevicePath ""
	Aug 17 21:24:27 ingress-addon-legacy-997484 kubelet[1867]: E0817 21:24:27.144088    1867 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-6bpdm.177c48acd5c851ae", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-6bpdm", UID:"c36461f9-677d-4882-be6a-284d8fa3da4f", APIVersion:"v1", ResourceVersion:"484", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-997484"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc12fc1e2c881a3ae, ext:233567284150, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc12fc1e2c881a3ae, ext:233567284150, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-6bpdm.177c48acd5c851ae" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 17 21:24:27 ingress-addon-legacy-997484 kubelet[1867]: E0817 21:24:27.148112    1867 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-6bpdm.177c48acd5c851ae", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-6bpdm", UID:"c36461f9-677d-4882-be6a-284d8fa3da4f", APIVersion:"v1", ResourceVersion:"484", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-997484"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc12fc1e2c881a3ae, ext:233567284150, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc12fc1e2c8a6a122, ext:233569708323, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-6bpdm.177c48acd5c851ae" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 17 21:24:29 ingress-addon-legacy-997484 kubelet[1867]: W0817 21:24:29.387317    1867 pod_container_deletor.go:77] Container "0796bd2f7e69e4a860330c18b291dcbbc3380698b698f04e2aaaeb2d6bf4dab7" not found in pod's containers
	Aug 17 21:24:31 ingress-addon-legacy-997484 kubelet[1867]: I0817 21:24:31.331526    1867 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/c36461f9-677d-4882-be6a-284d8fa3da4f-webhook-cert") pod "c36461f9-677d-4882-be6a-284d8fa3da4f" (UID: "c36461f9-677d-4882-be6a-284d8fa3da4f")
	Aug 17 21:24:31 ingress-addon-legacy-997484 kubelet[1867]: I0817 21:24:31.331592    1867 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-vw7pm" (UniqueName: "kubernetes.io/secret/c36461f9-677d-4882-be6a-284d8fa3da4f-ingress-nginx-token-vw7pm") pod "c36461f9-677d-4882-be6a-284d8fa3da4f" (UID: "c36461f9-677d-4882-be6a-284d8fa3da4f")
	Aug 17 21:24:31 ingress-addon-legacy-997484 kubelet[1867]: I0817 21:24:31.333473    1867 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c36461f9-677d-4882-be6a-284d8fa3da4f-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "c36461f9-677d-4882-be6a-284d8fa3da4f" (UID: "c36461f9-677d-4882-be6a-284d8fa3da4f"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 17 21:24:31 ingress-addon-legacy-997484 kubelet[1867]: I0817 21:24:31.333700    1867 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c36461f9-677d-4882-be6a-284d8fa3da4f-ingress-nginx-token-vw7pm" (OuterVolumeSpecName: "ingress-nginx-token-vw7pm") pod "c36461f9-677d-4882-be6a-284d8fa3da4f" (UID: "c36461f9-677d-4882-be6a-284d8fa3da4f"). InnerVolumeSpecName "ingress-nginx-token-vw7pm". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 17 21:24:31 ingress-addon-legacy-997484 kubelet[1867]: I0817 21:24:31.431882    1867 reconciler.go:319] Volume detached for volume "ingress-nginx-token-vw7pm" (UniqueName: "kubernetes.io/secret/c36461f9-677d-4882-be6a-284d8fa3da4f-ingress-nginx-token-vw7pm") on node "ingress-addon-legacy-997484" DevicePath ""
	Aug 17 21:24:31 ingress-addon-legacy-997484 kubelet[1867]: I0817 21:24:31.431918    1867 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/c36461f9-677d-4882-be6a-284d8fa3da4f-webhook-cert") on node "ingress-addon-legacy-997484" DevicePath ""
	Aug 17 21:24:33 ingress-addon-legacy-997484 kubelet[1867]: I0817 21:24:33.951773    1867 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 41ee30fab766646c707bcff84718940dd50e24300fdeb3d3bad04294368dfa25
	Aug 17 21:24:33 ingress-addon-legacy-997484 kubelet[1867]: I0817 21:24:33.967326    1867 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 3090cf900f7bda3862bc233bd57a26f973144860724c43ec677f4d4c68c0183a
	Aug 17 21:24:33 ingress-addon-legacy-997484 kubelet[1867]: I0817 21:24:33.982544    1867 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 864d6fb59804ddb6a6b300d154332ac7678bfc632645241de37c74cb5461f70f
	Aug 17 21:24:34 ingress-addon-legacy-997484 kubelet[1867]: E0817 21:24:34.058347    1867 fsHandler.go:118] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/973a95a18c26381e19535f9dab05a070689a0e7bef525beae5fced5b50687b08/diff" to get inode usage: stat /var/lib/containers/storage/overlay/973a95a18c26381e19535f9dab05a070689a0e7bef525beae5fced5b50687b08/diff: no such file or directory, extraDiskErr: <nil>
	Aug 17 21:24:34 ingress-addon-legacy-997484 kubelet[1867]: E0817 21:24:34.059451    1867 fsHandler.go:118] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/6250d38fa4972bf027d052086efca6b6bebabf0b43cdb946670d6cf1e304e63f/diff" to get inode usage: stat /var/lib/containers/storage/overlay/6250d38fa4972bf027d052086efca6b6bebabf0b43cdb946670d6cf1e304e63f/diff: no such file or directory, extraDiskErr: <nil>
	Aug 17 21:24:34 ingress-addon-legacy-997484 kubelet[1867]: E0817 21:24:34.061132    1867 fsHandler.go:118] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/6250d38fa4972bf027d052086efca6b6bebabf0b43cdb946670d6cf1e304e63f/diff" to get inode usage: stat /var/lib/containers/storage/overlay/6250d38fa4972bf027d052086efca6b6bebabf0b43cdb946670d6cf1e304e63f/diff: no such file or directory, extraDiskErr: <nil>
	Aug 17 21:24:34 ingress-addon-legacy-997484 kubelet[1867]: E0817 21:24:34.062911    1867 fsHandler.go:118] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/bf64e6007887a478020271d46b534a7aa0919307bd6e9c29ef34646224df08ea/diff" to get inode usage: stat /var/lib/containers/storage/overlay/bf64e6007887a478020271d46b534a7aa0919307bd6e9c29ef34646224df08ea/diff: no such file or directory, extraDiskErr: <nil>
	Aug 17 21:24:34 ingress-addon-legacy-997484 kubelet[1867]: E0817 21:24:34.064663    1867 fsHandler.go:118] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/40f281118c0f74754b8a99c9db6ce1a47cd19431262e1b923b0e5bbe55fc9ae4/diff" to get inode usage: stat /var/lib/containers/storage/overlay/40f281118c0f74754b8a99c9db6ce1a47cd19431262e1b923b0e5bbe55fc9ae4/diff: no such file or directory, extraDiskErr: <nil>
	Aug 17 21:24:34 ingress-addon-legacy-997484 kubelet[1867]: E0817 21:24:34.068573    1867 fsHandler.go:118] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/973a95a18c26381e19535f9dab05a070689a0e7bef525beae5fced5b50687b08/diff" to get inode usage: stat /var/lib/containers/storage/overlay/973a95a18c26381e19535f9dab05a070689a0e7bef525beae5fced5b50687b08/diff: no such file or directory, extraDiskErr: <nil>
	Aug 17 21:24:34 ingress-addon-legacy-997484 kubelet[1867]: E0817 21:24:34.076000    1867 fsHandler.go:118] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/bf64e6007887a478020271d46b534a7aa0919307bd6e9c29ef34646224df08ea/diff" to get inode usage: stat /var/lib/containers/storage/overlay/bf64e6007887a478020271d46b534a7aa0919307bd6e9c29ef34646224df08ea/diff: no such file or directory, extraDiskErr: <nil>
	Aug 17 21:24:34 ingress-addon-legacy-997484 kubelet[1867]: E0817 21:24:34.076952    1867 fsHandler.go:118] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/40f281118c0f74754b8a99c9db6ce1a47cd19431262e1b923b0e5bbe55fc9ae4/diff" to get inode usage: stat /var/lib/containers/storage/overlay/40f281118c0f74754b8a99c9db6ce1a47cd19431262e1b923b0e5bbe55fc9ae4/diff: no such file or directory, extraDiskErr: <nil>
	Aug 17 21:24:34 ingress-addon-legacy-997484 kubelet[1867]: W0817 21:24:34.371903    1867 container.go:526] Failed to update stats for container "/docker/de7d7df359b2008f96efcbc2c960754a3ef93880687ff7cfbec215e5bc0b0264/crio-0796bd2f7e69e4a860330c18b291dcbbc3380698b698f04e2aaaeb2d6bf4dab7": unable to determine device info for dir: /var/lib/containers/storage/overlay/6250d38fa4972bf027d052086efca6b6bebabf0b43cdb946670d6cf1e304e63f/diff: stat failed on /var/lib/containers/storage/overlay/6250d38fa4972bf027d052086efca6b6bebabf0b43cdb946670d6cf1e304e63f/diff with error: no such file or directory, continuing to push stats
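
Note: the fsHandler stat failures above are a benign race with the RemoveContainer calls a few lines earlier: cAdvisor attempts to collect disk stats for overlay layers whose containers were just deleted, so the .../diff directories no longer exist. A hypothetical one-off check that a layer really is gone:

	out/minikube-linux-amd64 -p ingress-addon-legacy-997484 ssh -- \
	  sudo ls /var/lib/containers/storage/overlay/973a95a18c26381e19535f9dab05a070689a0e7bef525beae5fced5b50687b08/diff
	# expected: ls: cannot access '...': No such file or directory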
	
	* 
	* ==> storage-provisioner [cf7703db640afb6afd148e75986d7fbcb505d8f5863077dec49742fc372ce9f2] <==
	* I0817 21:21:09.223643       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0817 21:21:09.232138       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0817 21:21:09.232190       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0817 21:21:09.238595       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0817 21:21:09.238725       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-997484_a9cb023a-1fb2-4a0c-802b-13325a7b4e40!
	I0817 21:21:09.239812       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7b6b8092-6500-4364-a588-15a626119f77", APIVersion:"v1", ResourceVersion:"426", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-997484_a9cb023a-1fb2-4a0c-802b-13325a7b4e40 became leader
	I0817 21:21:09.339808       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-997484_a9cb023a-1fb2-4a0c-802b-13325a7b4e40!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-997484 -n ingress-addon-legacy-997484
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-997484 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (183.74s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (2.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-938028 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-938028 -- exec busybox-67b7f59bb-b9qpl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-938028 -- exec busybox-67b7f59bb-b9qpl -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-938028 -- exec busybox-67b7f59bb-b9qpl -- sh -c "ping -c 1 192.168.58.1": exit status 1 (164.889478ms)

-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-67b7f59bb-b9qpl): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-938028 -- exec busybox-67b7f59bb-khspl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-938028 -- exec busybox-67b7f59bb-khspl -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-938028 -- exec busybox-67b7f59bb-khspl -- sh -c "ping -c 1 192.168.58.1": exit status 1 (159.366807ms)

-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-67b7f59bb-khspl): exit status 1
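Note: busybox prints "ping: permission denied (are you root?)" when it can open neither a raw ICMP socket (which needs CAP_NET_RAW, evidently missing from this pod) nor, on busybox builds that support the fallback, an unprivileged SOCK_DGRAM ICMP socket (which needs the node's net.ipv4.ping_group_range to cover the container's GID). A sketch of the two checks, assuming the context and gateway IP from the test above; the pod name ping-test is invented for illustration:

	# Does the node allow unprivileged ICMP sockets at all? The default "1 0" disables them.
	out/minikube-linux-amd64 -p multinode-938028 ssh "sysctl net.ipv4.ping_group_range"

	# Retry the ping from a throwaway pod that explicitly adds NET_RAW.
	kubectl --context multinode-938028 run ping-test --rm -i --restart=Never --image=busybox \
	  --overrides='{"apiVersion":"v1","spec":{"containers":[{"name":"ping-test","image":"busybox","args":["ping","-c","1","192.168.58.1"],"securityContext":{"capabilities":{"add":["NET_RAW"]}}}]}}'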
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-938028
helpers_test.go:235: (dbg) docker inspect multinode-938028:

-- stdout --
	[
	    {
	        "Id": "5ae5510f223cafb802b34b5efa574c24fd46098c1d7a1fa53350cbcba3370595",
	        "Created": "2023-08-17T21:29:40.530872941Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 102855,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-08-17T21:29:40.801738354Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6cc01e6091959400f260dc442708e7c71630b58dab1f7c344cb00926bd84950",
	        "ResolvConfPath": "/var/lib/docker/containers/5ae5510f223cafb802b34b5efa574c24fd46098c1d7a1fa53350cbcba3370595/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5ae5510f223cafb802b34b5efa574c24fd46098c1d7a1fa53350cbcba3370595/hostname",
	        "HostsPath": "/var/lib/docker/containers/5ae5510f223cafb802b34b5efa574c24fd46098c1d7a1fa53350cbcba3370595/hosts",
	        "LogPath": "/var/lib/docker/containers/5ae5510f223cafb802b34b5efa574c24fd46098c1d7a1fa53350cbcba3370595/5ae5510f223cafb802b34b5efa574c24fd46098c1d7a1fa53350cbcba3370595-json.log",
	        "Name": "/multinode-938028",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-938028:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-938028",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d2c4033df03ed3e19ee88034953b2f8bc2c324542efe923d9403e45d480cda35-init/diff:/var/lib/docker/overlay2/4fa4181e3bc5ec3351265343644d26aad7e77680fc05db63fc4bb2710b90d29d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d2c4033df03ed3e19ee88034953b2f8bc2c324542efe923d9403e45d480cda35/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d2c4033df03ed3e19ee88034953b2f8bc2c324542efe923d9403e45d480cda35/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d2c4033df03ed3e19ee88034953b2f8bc2c324542efe923d9403e45d480cda35/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-938028",
	                "Source": "/var/lib/docker/volumes/multinode-938028/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-938028",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-938028",
	                "name.minikube.sigs.k8s.io": "multinode-938028",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1ac621fea07c3581d0bfaba46fd85b2534ff6ea8ae8dca5d02f1921a6ca19d60",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32847"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32846"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32843"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32845"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32844"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1ac621fea07c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-938028": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5ae5510f223c",
	                        "multinode-938028"
	                    ],
	                    "NetworkID": "093b178ab28fa6208d6cdedc755aaa528f230e28576eee1b026e95f3041da28c",
	                    "EndpointID": "9719e2ec204eebf378ea7c73d6593b16f89e89a9b580fd24ec9a3057296922df",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
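Note: in the inspect output above, HostConfig.PortBindings requests ephemeral host ports (HostPort is empty), and the resolved host ports only appear under NetworkSettings.Ports. A sketch of reading one mapping back, using the same Go template the harness itself runs later in these logs:

	docker container inspect multinode-938028 \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'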
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-938028 -n multinode-938028
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-938028 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-938028 logs -n 25: (1.147047277s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-823424                           | mount-start-2-823424 | jenkins | v1.31.2 | 17 Aug 23 21:29 UTC | 17 Aug 23 21:29 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-823424 ssh -- ls                    | mount-start-2-823424 | jenkins | v1.31.2 | 17 Aug 23 21:29 UTC | 17 Aug 23 21:29 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-810049                           | mount-start-1-810049 | jenkins | v1.31.2 | 17 Aug 23 21:29 UTC | 17 Aug 23 21:29 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-823424 ssh -- ls                    | mount-start-2-823424 | jenkins | v1.31.2 | 17 Aug 23 21:29 UTC | 17 Aug 23 21:29 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-823424                           | mount-start-2-823424 | jenkins | v1.31.2 | 17 Aug 23 21:29 UTC | 17 Aug 23 21:29 UTC |
	| start   | -p mount-start-2-823424                           | mount-start-2-823424 | jenkins | v1.31.2 | 17 Aug 23 21:29 UTC | 17 Aug 23 21:29 UTC |
	| ssh     | mount-start-2-823424 ssh -- ls                    | mount-start-2-823424 | jenkins | v1.31.2 | 17 Aug 23 21:29 UTC | 17 Aug 23 21:29 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-823424                           | mount-start-2-823424 | jenkins | v1.31.2 | 17 Aug 23 21:29 UTC | 17 Aug 23 21:29 UTC |
	| delete  | -p mount-start-1-810049                           | mount-start-1-810049 | jenkins | v1.31.2 | 17 Aug 23 21:29 UTC | 17 Aug 23 21:29 UTC |
	| start   | -p multinode-938028                               | multinode-938028     | jenkins | v1.31.2 | 17 Aug 23 21:29 UTC | 17 Aug 23 21:30 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-938028 -- apply -f                   | multinode-938028     | jenkins | v1.31.2 | 17 Aug 23 21:30 UTC | 17 Aug 23 21:30 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-938028 -- rollout                    | multinode-938028     | jenkins | v1.31.2 | 17 Aug 23 21:30 UTC | 17 Aug 23 21:30 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-938028 -- get pods -o                | multinode-938028     | jenkins | v1.31.2 | 17 Aug 23 21:30 UTC | 17 Aug 23 21:30 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-938028 -- get pods -o                | multinode-938028     | jenkins | v1.31.2 | 17 Aug 23 21:30 UTC | 17 Aug 23 21:30 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-938028 -- exec                       | multinode-938028     | jenkins | v1.31.2 | 17 Aug 23 21:30 UTC | 17 Aug 23 21:30 UTC |
	|         | busybox-67b7f59bb-b9qpl --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-938028 -- exec                       | multinode-938028     | jenkins | v1.31.2 | 17 Aug 23 21:30 UTC | 17 Aug 23 21:30 UTC |
	|         | busybox-67b7f59bb-khspl --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-938028 -- exec                       | multinode-938028     | jenkins | v1.31.2 | 17 Aug 23 21:30 UTC | 17 Aug 23 21:30 UTC |
	|         | busybox-67b7f59bb-b9qpl --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-938028 -- exec                       | multinode-938028     | jenkins | v1.31.2 | 17 Aug 23 21:30 UTC | 17 Aug 23 21:30 UTC |
	|         | busybox-67b7f59bb-khspl --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-938028 -- exec                       | multinode-938028     | jenkins | v1.31.2 | 17 Aug 23 21:30 UTC | 17 Aug 23 21:30 UTC |
	|         | busybox-67b7f59bb-b9qpl -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-938028 -- exec                       | multinode-938028     | jenkins | v1.31.2 | 17 Aug 23 21:30 UTC | 17 Aug 23 21:30 UTC |
	|         | busybox-67b7f59bb-khspl -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-938028 -- get pods -o                | multinode-938028     | jenkins | v1.31.2 | 17 Aug 23 21:30 UTC | 17 Aug 23 21:30 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-938028 -- exec                       | multinode-938028     | jenkins | v1.31.2 | 17 Aug 23 21:30 UTC | 17 Aug 23 21:30 UTC |
	|         | busybox-67b7f59bb-b9qpl                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-938028 -- exec                       | multinode-938028     | jenkins | v1.31.2 | 17 Aug 23 21:30 UTC |                     |
	|         | busybox-67b7f59bb-b9qpl -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-938028 -- exec                       | multinode-938028     | jenkins | v1.31.2 | 17 Aug 23 21:30 UTC | 17 Aug 23 21:30 UTC |
	|         | busybox-67b7f59bb-khspl                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-938028 -- exec                       | multinode-938028     | jenkins | v1.31.2 | 17 Aug 23 21:30 UTC |                     |
	|         | busybox-67b7f59bb-khspl -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/17 21:29:34
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 21:29:34.851181  102247 out.go:296] Setting OutFile to fd 1 ...
	I0817 21:29:34.851297  102247 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:29:34.851306  102247 out.go:309] Setting ErrFile to fd 2...
	I0817 21:29:34.851311  102247 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:29:34.851499  102247 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-10716/.minikube/bin
	I0817 21:29:34.852055  102247 out.go:303] Setting JSON to false
	I0817 21:29:34.853204  102247 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4323,"bootTime":1692303452,"procs":619,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0817 21:29:34.853267  102247 start.go:138] virtualization: kvm guest
	I0817 21:29:34.855860  102247 out.go:177] * [multinode-938028] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0817 21:29:34.857411  102247 out.go:177]   - MINIKUBE_LOCATION=16865
	I0817 21:29:34.858941  102247 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0817 21:29:34.857451  102247 notify.go:220] Checking for updates...
	I0817 21:29:34.862043  102247 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16865-10716/kubeconfig
	I0817 21:29:34.863591  102247 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-10716/.minikube
	I0817 21:29:34.865782  102247 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0817 21:29:34.867363  102247 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0817 21:29:34.868996  102247 driver.go:373] Setting default libvirt URI to qemu:///system
	I0817 21:29:34.891061  102247 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0817 21:29:34.891149  102247 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0817 21:29:34.945326  102247 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:36 SystemTime:2023-08-17 21:29:34.936565093 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0817 21:29:34.945414  102247 docker.go:294] overlay module found
	I0817 21:29:34.947276  102247 out.go:177] * Using the docker driver based on user configuration
	I0817 21:29:34.948774  102247 start.go:298] selected driver: docker
	I0817 21:29:34.948784  102247 start.go:902] validating driver "docker" against <nil>
	I0817 21:29:34.948794  102247 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0817 21:29:34.949525  102247 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0817 21:29:35.000619  102247 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:36 SystemTime:2023-08-17 21:29:34.992862888 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0817 21:29:35.000853  102247 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0817 21:29:35.001036  102247 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0817 21:29:35.003023  102247 out.go:177] * Using Docker driver with root privileges
	I0817 21:29:35.004261  102247 cni.go:84] Creating CNI manager for ""
	I0817 21:29:35.004270  102247 cni.go:136] 0 nodes found, recommending kindnet
	I0817 21:29:35.004280  102247 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0817 21:29:35.004290  102247 start_flags.go:319] config:
	{Name:multinode-938028 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-938028 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:29:35.005827  102247 out.go:177] * Starting control plane node multinode-938028 in cluster multinode-938028
	I0817 21:29:35.007122  102247 cache.go:122] Beginning downloading kic base image for docker with crio
	I0817 21:29:35.008492  102247 out.go:177] * Pulling base image ...
	I0817 21:29:35.009836  102247 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0817 21:29:35.009877  102247 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16865-10716/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4
	I0817 21:29:35.009886  102247 cache.go:57] Caching tarball of preloaded images
	I0817 21:29:35.009943  102247 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0817 21:29:35.009990  102247 preload.go:174] Found /home/jenkins/minikube-integration/16865-10716/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0817 21:29:35.010002  102247 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on crio
	I0817 21:29:35.010277  102247 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/multinode-938028/config.json ...
	I0817 21:29:35.010295  102247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/multinode-938028/config.json: {Name:mk37a615de19a2d8f4d81e07da8bd22386e0771f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:29:35.025868  102247 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0817 21:29:35.025888  102247 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0817 21:29:35.025920  102247 cache.go:195] Successfully downloaded all kic artifacts
	I0817 21:29:35.025950  102247 start.go:365] acquiring machines lock for multinode-938028: {Name:mk814ba5b13f1c172e3993ba0b719ea067193e6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 21:29:35.026046  102247 start.go:369] acquired machines lock for "multinode-938028" in 74.143µs
	I0817 21:29:35.026073  102247 start.go:93] Provisioning new machine with config: &{Name:multinode-938028 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-938028 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0817 21:29:35.026191  102247 start.go:125] createHost starting for "" (driver="docker")
	I0817 21:29:35.028159  102247 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0817 21:29:35.028410  102247 start.go:159] libmachine.API.Create for "multinode-938028" (driver="docker")
	I0817 21:29:35.028440  102247 client.go:168] LocalClient.Create starting
	I0817 21:29:35.028538  102247 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca.pem
	I0817 21:29:35.028580  102247 main.go:141] libmachine: Decoding PEM data...
	I0817 21:29:35.028603  102247 main.go:141] libmachine: Parsing certificate...
	I0817 21:29:35.028688  102247 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16865-10716/.minikube/certs/cert.pem
	I0817 21:29:35.028723  102247 main.go:141] libmachine: Decoding PEM data...
	I0817 21:29:35.028741  102247 main.go:141] libmachine: Parsing certificate...
	I0817 21:29:35.029082  102247 cli_runner.go:164] Run: docker network inspect multinode-938028 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0817 21:29:35.044540  102247 cli_runner.go:211] docker network inspect multinode-938028 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0817 21:29:35.044597  102247 network_create.go:281] running [docker network inspect multinode-938028] to gather additional debugging logs...
	I0817 21:29:35.044615  102247 cli_runner.go:164] Run: docker network inspect multinode-938028
	W0817 21:29:35.059345  102247 cli_runner.go:211] docker network inspect multinode-938028 returned with exit code 1
	I0817 21:29:35.059371  102247 network_create.go:284] error running [docker network inspect multinode-938028]: docker network inspect multinode-938028: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-938028 not found
	I0817 21:29:35.059381  102247 network_create.go:286] output of [docker network inspect multinode-938028]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-938028 not found
	
	** /stderr **
	I0817 21:29:35.059433  102247 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 21:29:35.075117  102247 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-fed2b9ca4bf2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:ef:4f:c6:36} reservation:<nil>}
	I0817 21:29:35.075567  102247 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00142fa20}
	I0817 21:29:35.075586  102247 network_create.go:123] attempt to create docker network multinode-938028 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0817 21:29:35.075624  102247 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-938028 multinode-938028
	I0817 21:29:35.126135  102247 network_create.go:107] docker network multinode-938028 192.168.58.0/24 created
	I0817 21:29:35.126163  102247 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-938028" container
	I0817 21:29:35.126223  102247 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0817 21:29:35.142084  102247 cli_runner.go:164] Run: docker volume create multinode-938028 --label name.minikube.sigs.k8s.io=multinode-938028 --label created_by.minikube.sigs.k8s.io=true
	I0817 21:29:35.158000  102247 oci.go:103] Successfully created a docker volume multinode-938028
	I0817 21:29:35.158066  102247 cli_runner.go:164] Run: docker run --rm --name multinode-938028-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-938028 --entrypoint /usr/bin/test -v multinode-938028:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0817 21:29:35.645077  102247 oci.go:107] Successfully prepared a docker volume multinode-938028
	I0817 21:29:35.645111  102247 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0817 21:29:35.645130  102247 kic.go:190] Starting extracting preloaded images to volume ...
	I0817 21:29:35.645206  102247 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16865-10716/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-938028:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0817 21:29:40.465836  102247 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16865-10716/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-938028:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (4.820570361s)
	I0817 21:29:40.465874  102247 kic.go:199] duration metric: took 4.820738 seconds to extract preloaded images to volume
	W0817 21:29:40.466054  102247 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0817 21:29:40.466177  102247 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0817 21:29:40.516777  102247 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-938028 --name multinode-938028 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-938028 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-938028 --network multinode-938028 --ip 192.168.58.2 --volume multinode-938028:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0817 21:29:40.809779  102247 cli_runner.go:164] Run: docker container inspect multinode-938028 --format={{.State.Running}}
	I0817 21:29:40.827619  102247 cli_runner.go:164] Run: docker container inspect multinode-938028 --format={{.State.Status}}
	I0817 21:29:40.845025  102247 cli_runner.go:164] Run: docker exec multinode-938028 stat /var/lib/dpkg/alternatives/iptables
	I0817 21:29:40.893060  102247 oci.go:144] the created container "multinode-938028" has a running status.
	I0817 21:29:40.893093  102247 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16865-10716/.minikube/machines/multinode-938028/id_rsa...
	I0817 21:29:41.109797  102247 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/machines/multinode-938028/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0817 21:29:41.109842  102247 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16865-10716/.minikube/machines/multinode-938028/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0817 21:29:41.129786  102247 cli_runner.go:164] Run: docker container inspect multinode-938028 --format={{.State.Status}}
	I0817 21:29:41.151554  102247 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0817 21:29:41.151588  102247 kic_runner.go:114] Args: [docker exec --privileged multinode-938028 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0817 21:29:41.235813  102247 cli_runner.go:164] Run: docker container inspect multinode-938028 --format={{.State.Status}}
	I0817 21:29:41.254426  102247 machine.go:88] provisioning docker machine ...
	I0817 21:29:41.254472  102247 ubuntu.go:169] provisioning hostname "multinode-938028"
	I0817 21:29:41.254544  102247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-938028
	I0817 21:29:41.282049  102247 main.go:141] libmachine: Using SSH client type: native
	I0817 21:29:41.282512  102247 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I0817 21:29:41.282529  102247 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-938028 && echo "multinode-938028" | sudo tee /etc/hostname
	I0817 21:29:41.479619  102247 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-938028
	
	I0817 21:29:41.479694  102247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-938028
	I0817 21:29:41.496529  102247 main.go:141] libmachine: Using SSH client type: native
	I0817 21:29:41.496977  102247 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I0817 21:29:41.496998  102247 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-938028' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-938028/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-938028' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 21:29:41.621862  102247 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 21:29:41.621888  102247 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16865-10716/.minikube CaCertPath:/home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16865-10716/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16865-10716/.minikube}
	I0817 21:29:41.621933  102247 ubuntu.go:177] setting up certificates
	I0817 21:29:41.621944  102247 provision.go:83] configureAuth start
	I0817 21:29:41.621998  102247 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-938028
	I0817 21:29:41.637705  102247 provision.go:138] copyHostCerts
	I0817 21:29:41.637744  102247 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16865-10716/.minikube/ca.pem
	I0817 21:29:41.637804  102247 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-10716/.minikube/ca.pem, removing ...
	I0817 21:29:41.637816  102247 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-10716/.minikube/ca.pem
	I0817 21:29:41.637880  102247 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16865-10716/.minikube/ca.pem (1078 bytes)
	I0817 21:29:41.637981  102247 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16865-10716/.minikube/cert.pem
	I0817 21:29:41.638002  102247 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-10716/.minikube/cert.pem, removing ...
	I0817 21:29:41.638009  102247 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-10716/.minikube/cert.pem
	I0817 21:29:41.638034  102247 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16865-10716/.minikube/cert.pem (1123 bytes)
	I0817 21:29:41.638079  102247 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16865-10716/.minikube/key.pem
	I0817 21:29:41.638095  102247 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-10716/.minikube/key.pem, removing ...
	I0817 21:29:41.638102  102247 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-10716/.minikube/key.pem
	I0817 21:29:41.638126  102247 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16865-10716/.minikube/key.pem (1679 bytes)
	I0817 21:29:41.638169  102247 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16865-10716/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca-key.pem org=jenkins.multinode-938028 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-938028]
	I0817 21:29:41.734310  102247 provision.go:172] copyRemoteCerts
	I0817 21:29:41.734371  102247 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 21:29:41.734405  102247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-938028
	I0817 21:29:41.750412  102247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/multinode-938028/id_rsa Username:docker}
	I0817 21:29:41.841876  102247 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0817 21:29:41.841948  102247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0817 21:29:41.862394  102247 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0817 21:29:41.862450  102247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0817 21:29:41.882415  102247 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0817 21:29:41.882463  102247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0817 21:29:41.901852  102247 provision.go:86] duration metric: configureAuth took 279.896272ms
	I0817 21:29:41.901874  102247 ubuntu.go:193] setting minikube options for container-runtime
	I0817 21:29:41.902068  102247 config.go:182] Loaded profile config "multinode-938028": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 21:29:41.902172  102247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-938028
	I0817 21:29:41.918542  102247 main.go:141] libmachine: Using SSH client type: native
	I0817 21:29:41.918919  102247 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I0817 21:29:41.918936  102247 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0817 21:29:42.122056  102247 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0817 21:29:42.122084  102247 machine.go:91] provisioned docker machine in 867.627807ms
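
The SSH command just above writes a one-line environment file that hands --insecure-registry to CRI-O and then restarts the daemon so the flag takes effect. Spelled out as plain shell, it is roughly:

    # write the minikube drop-in env file for CRI-O and restart it
    sudo mkdir -p /etc/sysconfig
    printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" \
      | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio
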
	I0817 21:29:42.122093  102247 client.go:171] LocalClient.Create took 7.093645208s
	I0817 21:29:42.122111  102247 start.go:167] duration metric: libmachine.API.Create for "multinode-938028" took 7.093702184s
	I0817 21:29:42.122117  102247 start.go:300] post-start starting for "multinode-938028" (driver="docker")
	I0817 21:29:42.122125  102247 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 21:29:42.122179  102247 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 21:29:42.122217  102247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-938028
	I0817 21:29:42.139080  102247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/multinode-938028/id_rsa Username:docker}
	I0817 21:29:42.230148  102247 ssh_runner.go:195] Run: cat /etc/os-release
	I0817 21:29:42.232950  102247 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.2 LTS"
	I0817 21:29:42.232968  102247 command_runner.go:130] > NAME="Ubuntu"
	I0817 21:29:42.232976  102247 command_runner.go:130] > VERSION_ID="22.04"
	I0817 21:29:42.232985  102247 command_runner.go:130] > VERSION="22.04.2 LTS (Jammy Jellyfish)"
	I0817 21:29:42.233005  102247 command_runner.go:130] > VERSION_CODENAME=jammy
	I0817 21:29:42.233017  102247 command_runner.go:130] > ID=ubuntu
	I0817 21:29:42.233024  102247 command_runner.go:130] > ID_LIKE=debian
	I0817 21:29:42.233030  102247 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0817 21:29:42.233034  102247 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0817 21:29:42.233042  102247 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0817 21:29:42.233052  102247 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0817 21:29:42.233058  102247 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0817 21:29:42.233104  102247 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0817 21:29:42.233126  102247 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0817 21:29:42.233134  102247 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0817 21:29:42.233142  102247 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0817 21:29:42.233149  102247 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-10716/.minikube/addons for local assets ...
	I0817 21:29:42.233203  102247 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-10716/.minikube/files for local assets ...
	I0817 21:29:42.233270  102247 filesync.go:149] local asset: /home/jenkins/minikube-integration/16865-10716/.minikube/files/etc/ssl/certs/175042.pem -> 175042.pem in /etc/ssl/certs
	I0817 21:29:42.233279  102247 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/files/etc/ssl/certs/175042.pem -> /etc/ssl/certs/175042.pem
	I0817 21:29:42.233353  102247 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0817 21:29:42.240552  102247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/files/etc/ssl/certs/175042.pem --> /etc/ssl/certs/175042.pem (1708 bytes)
	I0817 21:29:42.260914  102247 start.go:303] post-start completed in 138.784344ms
	I0817 21:29:42.261228  102247 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-938028
	I0817 21:29:42.276503  102247 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/multinode-938028/config.json ...
	I0817 21:29:42.276710  102247 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 21:29:42.276751  102247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-938028
	I0817 21:29:42.291525  102247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/multinode-938028/id_rsa Username:docker}
	I0817 21:29:42.378602  102247 command_runner.go:130] > 19%
	I0817 21:29:42.378676  102247 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0817 21:29:42.382536  102247 command_runner.go:130] > 238G
	I0817 21:29:42.382557  102247 start.go:128] duration metric: createHost completed in 7.356353705s
	I0817 21:29:42.382568  102247 start.go:83] releasing machines lock for "multinode-938028", held for 7.356509849s
	I0817 21:29:42.382631  102247 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-938028
	I0817 21:29:42.398079  102247 ssh_runner.go:195] Run: cat /version.json
	I0817 21:29:42.398126  102247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-938028
	I0817 21:29:42.398135  102247 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0817 21:29:42.398186  102247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-938028
	I0817 21:29:42.413872  102247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/multinode-938028/id_rsa Username:docker}
	I0817 21:29:42.415677  102247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/multinode-938028/id_rsa Username:docker}
	I0817 21:29:42.497038  102247 command_runner.go:130] > {"iso_version": "v1.30.1-1689243309-16875", "kicbase_version": "v0.0.40", "minikube_version": "v1.31.0", "commit": "085433cd1b734742870dea5be8f9ee2ce4c54148"}
	I0817 21:29:42.497179  102247 ssh_runner.go:195] Run: systemctl --version
	I0817 21:29:42.586391  102247 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0817 21:29:42.588286  102247 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.9)
	I0817 21:29:42.588315  102247 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0817 21:29:42.588381  102247 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0817 21:29:42.723975  102247 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0817 21:29:42.727697  102247 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0817 21:29:42.727725  102247 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0817 21:29:42.727736  102247 command_runner.go:130] > Device: 33h/51d	Inode: 834778      Links: 1
	I0817 21:29:42.727747  102247 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0817 21:29:42.727761  102247 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0817 21:29:42.727775  102247 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0817 21:29:42.727784  102247 command_runner.go:130] > Change: 2023-08-17 21:10:53.572447170 +0000
	I0817 21:29:42.727795  102247 command_runner.go:130] >  Birth: 2023-08-17 21:10:53.572447170 +0000
	I0817 21:29:42.727884  102247 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 21:29:42.745490  102247 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0817 21:29:42.745564  102247 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 21:29:42.769886  102247 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0817 21:29:42.769992  102247 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
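
Both find commands above neutralize CNI configs that would conflict with the cluster's own networking: every matching file under /etc/cni/net.d is renamed with a .mk_disabled suffix so CRI-O stops loading it. A readable sketch of the same rename (same effect, not the literal command minikube ran):

    # disable loopback/bridge/podman CNI configs by renaming them
    for f in /etc/cni/net.d/*loopback.conf* /etc/cni/net.d/*bridge* /etc/cni/net.d/*podman*; do
      case "$f" in *.mk_disabled) continue ;; esac
      [ -f "$f" ] && sudo mv "$f" "$f.mk_disabled"
    done
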
	I0817 21:29:42.770006  102247 start.go:466] detecting cgroup driver to use...
	I0817 21:29:42.770034  102247 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0817 21:29:42.770075  102247 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0817 21:29:42.782640  102247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0817 21:29:42.791741  102247 docker.go:196] disabling cri-docker service (if available) ...
	I0817 21:29:42.791790  102247 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0817 21:29:42.802967  102247 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0817 21:29:42.814530  102247 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0817 21:29:42.890480  102247 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0817 21:29:42.902801  102247 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0817 21:29:42.969889  102247 docker.go:212] disabling docker service ...
	I0817 21:29:42.969972  102247 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0817 21:29:42.986534  102247 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0817 21:29:42.996142  102247 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0817 21:29:43.006540  102247 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0817 21:29:43.061677  102247 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0817 21:29:43.071607  102247 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0817 21:29:43.138748  102247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0817 21:29:43.148610  102247 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 21:29:43.161940  102247 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0817 21:29:43.161976  102247 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0817 21:29:43.162018  102247 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 21:29:43.169980  102247 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0817 21:29:43.170026  102247 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 21:29:43.178034  102247 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 21:29:43.185860  102247 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 21:29:43.193743  102247 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
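
The four edits above leave /etc/crio/crio.conf.d/02-crio.conf with pause_image = "registry.k8s.io/pause:3.9", cgroup_manager = "cgroupfs", and a conmon_cgroup = "pod" line appended after the cgroup_manager setting. A quick confirmation (key names taken from the sed commands above):

    # show the values the sed edits were meant to leave behind
    grep -E '^(pause_image|cgroup_manager|conmon_cgroup) =' /etc/crio/crio.conf.d/02-crio.conf
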
	I0817 21:29:43.201179  102247 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 21:29:43.207282  102247 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0817 21:29:43.207866  102247 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 21:29:43.214839  102247 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0817 21:29:43.284901  102247 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0817 21:29:43.399343  102247 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0817 21:29:43.399407  102247 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0817 21:29:43.402555  102247 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0817 21:29:43.402578  102247 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0817 21:29:43.402589  102247 command_runner.go:130] > Device: 40h/64d	Inode: 186         Links: 1
	I0817 21:29:43.402601  102247 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0817 21:29:43.402613  102247 command_runner.go:130] > Access: 2023-08-17 21:29:43.388848501 +0000
	I0817 21:29:43.402624  102247 command_runner.go:130] > Modify: 2023-08-17 21:29:43.388848501 +0000
	I0817 21:29:43.402632  102247 command_runner.go:130] > Change: 2023-08-17 21:29:43.388848501 +0000
	I0817 21:29:43.402639  102247 command_runner.go:130] >  Birth: -
	I0817 21:29:43.402654  102247 start.go:534] Will wait 60s for crictl version
	I0817 21:29:43.402699  102247 ssh_runner.go:195] Run: which crictl
	I0817 21:29:43.405422  102247 command_runner.go:130] > /usr/bin/crictl
	I0817 21:29:43.405516  102247 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 21:29:43.433647  102247 command_runner.go:130] > Version:  0.1.0
	I0817 21:29:43.433671  102247 command_runner.go:130] > RuntimeName:  cri-o
	I0817 21:29:43.433678  102247 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0817 21:29:43.433687  102247 command_runner.go:130] > RuntimeApiVersion:  v1
	I0817 21:29:43.435633  102247 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
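
crictl reads its runtime-endpoint from the /etc/crictl.yaml written a few lines earlier, so this version probe doubles as a liveness check of /var/run/crio/crio.sock. Run by hand, either form below should report the same cri-o 1.24.6:

    # query the runtime over the endpoint configured in /etc/crictl.yaml
    sudo crictl version
    # or pin the endpoint explicitly
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
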
	I0817 21:29:43.435699  102247 ssh_runner.go:195] Run: crio --version
	I0817 21:29:43.466560  102247 command_runner.go:130] > crio version 1.24.6
	I0817 21:29:43.466582  102247 command_runner.go:130] > Version:          1.24.6
	I0817 21:29:43.466593  102247 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0817 21:29:43.466601  102247 command_runner.go:130] > GitTreeState:     clean
	I0817 21:29:43.466611  102247 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0817 21:29:43.466619  102247 command_runner.go:130] > GoVersion:        go1.18.2
	I0817 21:29:43.466625  102247 command_runner.go:130] > Compiler:         gc
	I0817 21:29:43.466630  102247 command_runner.go:130] > Platform:         linux/amd64
	I0817 21:29:43.466635  102247 command_runner.go:130] > Linkmode:         dynamic
	I0817 21:29:43.466644  102247 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0817 21:29:43.466651  102247 command_runner.go:130] > SeccompEnabled:   true
	I0817 21:29:43.466655  102247 command_runner.go:130] > AppArmorEnabled:  false
	I0817 21:29:43.467780  102247 ssh_runner.go:195] Run: crio --version
	I0817 21:29:43.497814  102247 command_runner.go:130] > crio version 1.24.6
	I0817 21:29:43.497836  102247 command_runner.go:130] > Version:          1.24.6
	I0817 21:29:43.497848  102247 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0817 21:29:43.497855  102247 command_runner.go:130] > GitTreeState:     clean
	I0817 21:29:43.497865  102247 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0817 21:29:43.497872  102247 command_runner.go:130] > GoVersion:        go1.18.2
	I0817 21:29:43.497879  102247 command_runner.go:130] > Compiler:         gc
	I0817 21:29:43.497887  102247 command_runner.go:130] > Platform:         linux/amd64
	I0817 21:29:43.497935  102247 command_runner.go:130] > Linkmode:         dynamic
	I0817 21:29:43.497952  102247 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0817 21:29:43.497960  102247 command_runner.go:130] > SeccompEnabled:   true
	I0817 21:29:43.497970  102247 command_runner.go:130] > AppArmorEnabled:  false
	I0817 21:29:43.501166  102247 out.go:177] * Preparing Kubernetes v1.27.4 on CRI-O 1.24.6 ...
	I0817 21:29:43.502621  102247 cli_runner.go:164] Run: docker network inspect multinode-938028 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 21:29:43.517946  102247 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0817 21:29:43.521154  102247 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
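
The one-liner above pins host.minikube.internal to the Docker network gateway: it filters any stale entry out of /etc/hosts, appends the fresh mapping, and copies the temp file back under sudo (a plain redirect would not run with root privileges). Unrolled for readability:

    # rewrite /etc/hosts with a fresh host.minikube.internal entry
    {
      grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.58.1\thost.minikube.internal\n'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
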
	I0817 21:29:43.530442  102247 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0817 21:29:43.530485  102247 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 21:29:43.576173  102247 command_runner.go:130] > {
	I0817 21:29:43.576197  102247 command_runner.go:130] >   "images": [
	I0817 21:29:43.576204  102247 command_runner.go:130] >     {
	I0817 21:29:43.576215  102247 command_runner.go:130] >       "id": "b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da",
	I0817 21:29:43.576221  102247 command_runner.go:130] >       "repoTags": [
	I0817 21:29:43.576227  102247 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0817 21:29:43.576230  102247 command_runner.go:130] >       ],
	I0817 21:29:43.576234  102247 command_runner.go:130] >       "repoDigests": [
	I0817 21:29:43.576242  102247 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974",
	I0817 21:29:43.576249  102247 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"
	I0817 21:29:43.576254  102247 command_runner.go:130] >       ],
	I0817 21:29:43.576261  102247 command_runner.go:130] >       "size": "65249302",
	I0817 21:29:43.576272  102247 command_runner.go:130] >       "uid": null,
	I0817 21:29:43.576282  102247 command_runner.go:130] >       "username": "",
	I0817 21:29:43.576295  102247 command_runner.go:130] >       "spec": null,
	I0817 21:29:43.576332  102247 command_runner.go:130] >       "pinned": false
	I0817 21:29:43.576340  102247 command_runner.go:130] >     },
	I0817 21:29:43.576343  102247 command_runner.go:130] >     {
	I0817 21:29:43.576349  102247 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0817 21:29:43.576353  102247 command_runner.go:130] >       "repoTags": [
	I0817 21:29:43.576361  102247 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0817 21:29:43.576364  102247 command_runner.go:130] >       ],
	I0817 21:29:43.576371  102247 command_runner.go:130] >       "repoDigests": [
	I0817 21:29:43.576378  102247 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0817 21:29:43.576387  102247 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0817 21:29:43.576394  102247 command_runner.go:130] >       ],
	I0817 21:29:43.576400  102247 command_runner.go:130] >       "size": "31470524",
	I0817 21:29:43.576407  102247 command_runner.go:130] >       "uid": null,
	I0817 21:29:43.576412  102247 command_runner.go:130] >       "username": "",
	I0817 21:29:43.576419  102247 command_runner.go:130] >       "spec": null,
	I0817 21:29:43.576424  102247 command_runner.go:130] >       "pinned": false
	I0817 21:29:43.576429  102247 command_runner.go:130] >     },
	I0817 21:29:43.576433  102247 command_runner.go:130] >     {
	I0817 21:29:43.576441  102247 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0817 21:29:43.576445  102247 command_runner.go:130] >       "repoTags": [
	I0817 21:29:43.576452  102247 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0817 21:29:43.576459  102247 command_runner.go:130] >       ],
	I0817 21:29:43.576463  102247 command_runner.go:130] >       "repoDigests": [
	I0817 21:29:43.576472  102247 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0817 21:29:43.576484  102247 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0817 21:29:43.576490  102247 command_runner.go:130] >       ],
	I0817 21:29:43.576495  102247 command_runner.go:130] >       "size": "53621675",
	I0817 21:29:43.576501  102247 command_runner.go:130] >       "uid": null,
	I0817 21:29:43.576505  102247 command_runner.go:130] >       "username": "",
	I0817 21:29:43.576511  102247 command_runner.go:130] >       "spec": null,
	I0817 21:29:43.576515  102247 command_runner.go:130] >       "pinned": false
	I0817 21:29:43.576521  102247 command_runner.go:130] >     },
	I0817 21:29:43.576525  102247 command_runner.go:130] >     {
	I0817 21:29:43.576534  102247 command_runner.go:130] >       "id": "86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681",
	I0817 21:29:43.576538  102247 command_runner.go:130] >       "repoTags": [
	I0817 21:29:43.576544  102247 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.7-0"
	I0817 21:29:43.576549  102247 command_runner.go:130] >       ],
	I0817 21:29:43.576554  102247 command_runner.go:130] >       "repoDigests": [
	I0817 21:29:43.576563  102247 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83",
	I0817 21:29:43.576572  102247 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9"
	I0817 21:29:43.576579  102247 command_runner.go:130] >       ],
	I0817 21:29:43.576586  102247 command_runner.go:130] >       "size": "297083935",
	I0817 21:29:43.576590  102247 command_runner.go:130] >       "uid": {
	I0817 21:29:43.576596  102247 command_runner.go:130] >         "value": "0"
	I0817 21:29:43.576600  102247 command_runner.go:130] >       },
	I0817 21:29:43.576606  102247 command_runner.go:130] >       "username": "",
	I0817 21:29:43.576611  102247 command_runner.go:130] >       "spec": null,
	I0817 21:29:43.576617  102247 command_runner.go:130] >       "pinned": false
	I0817 21:29:43.576621  102247 command_runner.go:130] >     },
	I0817 21:29:43.576628  102247 command_runner.go:130] >     {
	I0817 21:29:43.576634  102247 command_runner.go:130] >       "id": "e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c",
	I0817 21:29:43.576641  102247 command_runner.go:130] >       "repoTags": [
	I0817 21:29:43.576646  102247 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.27.4"
	I0817 21:29:43.576652  102247 command_runner.go:130] >       ],
	I0817 21:29:43.576656  102247 command_runner.go:130] >       "repoDigests": [
	I0817 21:29:43.576666  102247 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d",
	I0817 21:29:43.576675  102247 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:dcf39b4579f896291ec79bb2ef94ad2b51e2ad1846086df705b06dc3ae20c854"
	I0817 21:29:43.576681  102247 command_runner.go:130] >       ],
	I0817 21:29:43.576685  102247 command_runner.go:130] >       "size": "122078160",
	I0817 21:29:43.576691  102247 command_runner.go:130] >       "uid": {
	I0817 21:29:43.576695  102247 command_runner.go:130] >         "value": "0"
	I0817 21:29:43.576702  102247 command_runner.go:130] >       },
	I0817 21:29:43.576706  102247 command_runner.go:130] >       "username": "",
	I0817 21:29:43.576712  102247 command_runner.go:130] >       "spec": null,
	I0817 21:29:43.576717  102247 command_runner.go:130] >       "pinned": false
	I0817 21:29:43.576722  102247 command_runner.go:130] >     },
	I0817 21:29:43.576726  102247 command_runner.go:130] >     {
	I0817 21:29:43.576734  102247 command_runner.go:130] >       "id": "f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5",
	I0817 21:29:43.576741  102247 command_runner.go:130] >       "repoTags": [
	I0817 21:29:43.576746  102247 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.27.4"
	I0817 21:29:43.576752  102247 command_runner.go:130] >       ],
	I0817 21:29:43.576756  102247 command_runner.go:130] >       "repoDigests": [
	I0817 21:29:43.576766  102247 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265",
	I0817 21:29:43.576775  102247 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c4765f94930681526ac9179fc4e49b5254abcbfa33841af4602a52bc664f6934"
	I0817 21:29:43.576781  102247 command_runner.go:130] >       ],
	I0817 21:29:43.576786  102247 command_runner.go:130] >       "size": "113931062",
	I0817 21:29:43.576791  102247 command_runner.go:130] >       "uid": {
	I0817 21:29:43.576795  102247 command_runner.go:130] >         "value": "0"
	I0817 21:29:43.576802  102247 command_runner.go:130] >       },
	I0817 21:29:43.576806  102247 command_runner.go:130] >       "username": "",
	I0817 21:29:43.576812  102247 command_runner.go:130] >       "spec": null,
	I0817 21:29:43.576817  102247 command_runner.go:130] >       "pinned": false
	I0817 21:29:43.576822  102247 command_runner.go:130] >     },
	I0817 21:29:43.576826  102247 command_runner.go:130] >     {
	I0817 21:29:43.576834  102247 command_runner.go:130] >       "id": "6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4",
	I0817 21:29:43.576841  102247 command_runner.go:130] >       "repoTags": [
	I0817 21:29:43.576846  102247 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.27.4"
	I0817 21:29:43.576849  102247 command_runner.go:130] >       ],
	I0817 21:29:43.576855  102247 command_runner.go:130] >       "repoDigests": [
	I0817 21:29:43.576862  102247 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf",
	I0817 21:29:43.576871  102247 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ce9abe867450f8962eb851670b5869219ca0c3376777d1e18d89f9abedbe10c3"
	I0817 21:29:43.576877  102247 command_runner.go:130] >       ],
	I0817 21:29:43.576882  102247 command_runner.go:130] >       "size": "72714135",
	I0817 21:29:43.576888  102247 command_runner.go:130] >       "uid": null,
	I0817 21:29:43.576892  102247 command_runner.go:130] >       "username": "",
	I0817 21:29:43.576896  102247 command_runner.go:130] >       "spec": null,
	I0817 21:29:43.576902  102247 command_runner.go:130] >       "pinned": false
	I0817 21:29:43.576916  102247 command_runner.go:130] >     },
	I0817 21:29:43.576920  102247 command_runner.go:130] >     {
	I0817 21:29:43.576928  102247 command_runner.go:130] >       "id": "98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16",
	I0817 21:29:43.576934  102247 command_runner.go:130] >       "repoTags": [
	I0817 21:29:43.576939  102247 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.27.4"
	I0817 21:29:43.576945  102247 command_runner.go:130] >       ],
	I0817 21:29:43.576950  102247 command_runner.go:130] >       "repoDigests": [
	I0817 21:29:43.576988  102247 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af",
	I0817 21:29:43.577001  102247 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:9c58009453cfcd7533721327269d2ef0af93d09f21812a5d584c375840117da7"
	I0817 21:29:43.577007  102247 command_runner.go:130] >       ],
	I0817 21:29:43.577013  102247 command_runner.go:130] >       "size": "59814710",
	I0817 21:29:43.577023  102247 command_runner.go:130] >       "uid": {
	I0817 21:29:43.577030  102247 command_runner.go:130] >         "value": "0"
	I0817 21:29:43.577039  102247 command_runner.go:130] >       },
	I0817 21:29:43.577045  102247 command_runner.go:130] >       "username": "",
	I0817 21:29:43.577053  102247 command_runner.go:130] >       "spec": null,
	I0817 21:29:43.577062  102247 command_runner.go:130] >       "pinned": false
	I0817 21:29:43.577070  102247 command_runner.go:130] >     },
	I0817 21:29:43.577084  102247 command_runner.go:130] >     {
	I0817 21:29:43.577094  102247 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0817 21:29:43.577101  102247 command_runner.go:130] >       "repoTags": [
	I0817 21:29:43.577109  102247 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0817 21:29:43.577115  102247 command_runner.go:130] >       ],
	I0817 21:29:43.577121  102247 command_runner.go:130] >       "repoDigests": [
	I0817 21:29:43.577130  102247 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0817 21:29:43.577139  102247 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0817 21:29:43.577145  102247 command_runner.go:130] >       ],
	I0817 21:29:43.577149  102247 command_runner.go:130] >       "size": "750414",
	I0817 21:29:43.577155  102247 command_runner.go:130] >       "uid": {
	I0817 21:29:43.577159  102247 command_runner.go:130] >         "value": "65535"
	I0817 21:29:43.577166  102247 command_runner.go:130] >       },
	I0817 21:29:43.577170  102247 command_runner.go:130] >       "username": "",
	I0817 21:29:43.577174  102247 command_runner.go:130] >       "spec": null,
	I0817 21:29:43.577178  102247 command_runner.go:130] >       "pinned": false
	I0817 21:29:43.577182  102247 command_runner.go:130] >     }
	I0817 21:29:43.577186  102247 command_runner.go:130] >   ]
	I0817 21:29:43.577192  102247 command_runner.go:130] > }
	I0817 21:29:43.578149  102247 crio.go:496] all images are preloaded for cri-o runtime.
	I0817 21:29:43.578168  102247 crio.go:415] Images already preloaded, skipping extraction
	I0817 21:29:43.578208  102247 ssh_runner.go:195] Run: sudo crictl images --output json
	I0817 21:29:43.607013  102247 command_runner.go:130] > {
	I0817 21:29:43.607031  102247 command_runner.go:130] >   "images": [
	I0817 21:29:43.607035  102247 command_runner.go:130] >     {
	I0817 21:29:43.607043  102247 command_runner.go:130] >       "id": "b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da",
	I0817 21:29:43.607048  102247 command_runner.go:130] >       "repoTags": [
	I0817 21:29:43.607054  102247 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0817 21:29:43.607057  102247 command_runner.go:130] >       ],
	I0817 21:29:43.607064  102247 command_runner.go:130] >       "repoDigests": [
	I0817 21:29:43.607073  102247 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974",
	I0817 21:29:43.607079  102247 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"
	I0817 21:29:43.607083  102247 command_runner.go:130] >       ],
	I0817 21:29:43.607088  102247 command_runner.go:130] >       "size": "65249302",
	I0817 21:29:43.607091  102247 command_runner.go:130] >       "uid": null,
	I0817 21:29:43.607095  102247 command_runner.go:130] >       "username": "",
	I0817 21:29:43.607101  102247 command_runner.go:130] >       "spec": null,
	I0817 21:29:43.607106  102247 command_runner.go:130] >       "pinned": false
	I0817 21:29:43.607109  102247 command_runner.go:130] >     },
	I0817 21:29:43.607113  102247 command_runner.go:130] >     {
	I0817 21:29:43.607119  102247 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0817 21:29:43.607126  102247 command_runner.go:130] >       "repoTags": [
	I0817 21:29:43.607131  102247 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0817 21:29:43.607137  102247 command_runner.go:130] >       ],
	I0817 21:29:43.607141  102247 command_runner.go:130] >       "repoDigests": [
	I0817 21:29:43.607148  102247 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0817 21:29:43.607155  102247 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0817 21:29:43.607159  102247 command_runner.go:130] >       ],
	I0817 21:29:43.607188  102247 command_runner.go:130] >       "size": "31470524",
	I0817 21:29:43.607193  102247 command_runner.go:130] >       "uid": null,
	I0817 21:29:43.607199  102247 command_runner.go:130] >       "username": "",
	I0817 21:29:43.607204  102247 command_runner.go:130] >       "spec": null,
	I0817 21:29:43.607211  102247 command_runner.go:130] >       "pinned": false
	I0817 21:29:43.607214  102247 command_runner.go:130] >     },
	I0817 21:29:43.607220  102247 command_runner.go:130] >     {
	I0817 21:29:43.607228  102247 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0817 21:29:43.607233  102247 command_runner.go:130] >       "repoTags": [
	I0817 21:29:43.607243  102247 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0817 21:29:43.607247  102247 command_runner.go:130] >       ],
	I0817 21:29:43.607251  102247 command_runner.go:130] >       "repoDigests": [
	I0817 21:29:43.607260  102247 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0817 21:29:43.607269  102247 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0817 21:29:43.607273  102247 command_runner.go:130] >       ],
	I0817 21:29:43.607277  102247 command_runner.go:130] >       "size": "53621675",
	I0817 21:29:43.607281  102247 command_runner.go:130] >       "uid": null,
	I0817 21:29:43.607286  102247 command_runner.go:130] >       "username": "",
	I0817 21:29:43.607290  102247 command_runner.go:130] >       "spec": null,
	I0817 21:29:43.607294  102247 command_runner.go:130] >       "pinned": false
	I0817 21:29:43.607299  102247 command_runner.go:130] >     },
	I0817 21:29:43.607303  102247 command_runner.go:130] >     {
	I0817 21:29:43.607311  102247 command_runner.go:130] >       "id": "86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681",
	I0817 21:29:43.607315  102247 command_runner.go:130] >       "repoTags": [
	I0817 21:29:43.607321  102247 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.7-0"
	I0817 21:29:43.607325  102247 command_runner.go:130] >       ],
	I0817 21:29:43.607329  102247 command_runner.go:130] >       "repoDigests": [
	I0817 21:29:43.607338  102247 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83",
	I0817 21:29:43.607345  102247 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9"
	I0817 21:29:43.607354  102247 command_runner.go:130] >       ],
	I0817 21:29:43.607358  102247 command_runner.go:130] >       "size": "297083935",
	I0817 21:29:43.607365  102247 command_runner.go:130] >       "uid": {
	I0817 21:29:43.607369  102247 command_runner.go:130] >         "value": "0"
	I0817 21:29:43.607372  102247 command_runner.go:130] >       },
	I0817 21:29:43.607376  102247 command_runner.go:130] >       "username": "",
	I0817 21:29:43.607382  102247 command_runner.go:130] >       "spec": null,
	I0817 21:29:43.607386  102247 command_runner.go:130] >       "pinned": false
	I0817 21:29:43.607392  102247 command_runner.go:130] >     },
	I0817 21:29:43.607403  102247 command_runner.go:130] >     {
	I0817 21:29:43.607409  102247 command_runner.go:130] >       "id": "e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c",
	I0817 21:29:43.607415  102247 command_runner.go:130] >       "repoTags": [
	I0817 21:29:43.607420  102247 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.27.4"
	I0817 21:29:43.607425  102247 command_runner.go:130] >       ],
	I0817 21:29:43.607430  102247 command_runner.go:130] >       "repoDigests": [
	I0817 21:29:43.607439  102247 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d",
	I0817 21:29:43.607448  102247 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:dcf39b4579f896291ec79bb2ef94ad2b51e2ad1846086df705b06dc3ae20c854"
	I0817 21:29:43.607451  102247 command_runner.go:130] >       ],
	I0817 21:29:43.607457  102247 command_runner.go:130] >       "size": "122078160",
	I0817 21:29:43.607463  102247 command_runner.go:130] >       "uid": {
	I0817 21:29:43.607467  102247 command_runner.go:130] >         "value": "0"
	I0817 21:29:43.607472  102247 command_runner.go:130] >       },
	I0817 21:29:43.607477  102247 command_runner.go:130] >       "username": "",
	I0817 21:29:43.607483  102247 command_runner.go:130] >       "spec": null,
	I0817 21:29:43.607488  102247 command_runner.go:130] >       "pinned": false
	I0817 21:29:43.607493  102247 command_runner.go:130] >     },
	I0817 21:29:43.607497  102247 command_runner.go:130] >     {
	I0817 21:29:43.607503  102247 command_runner.go:130] >       "id": "f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5",
	I0817 21:29:43.607509  102247 command_runner.go:130] >       "repoTags": [
	I0817 21:29:43.607514  102247 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.27.4"
	I0817 21:29:43.607520  102247 command_runner.go:130] >       ],
	I0817 21:29:43.607524  102247 command_runner.go:130] >       "repoDigests": [
	I0817 21:29:43.607533  102247 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265",
	I0817 21:29:43.607543  102247 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c4765f94930681526ac9179fc4e49b5254abcbfa33841af4602a52bc664f6934"
	I0817 21:29:43.607546  102247 command_runner.go:130] >       ],
	I0817 21:29:43.607551  102247 command_runner.go:130] >       "size": "113931062",
	I0817 21:29:43.607554  102247 command_runner.go:130] >       "uid": {
	I0817 21:29:43.607558  102247 command_runner.go:130] >         "value": "0"
	I0817 21:29:43.607564  102247 command_runner.go:130] >       },
	I0817 21:29:43.607568  102247 command_runner.go:130] >       "username": "",
	I0817 21:29:43.607574  102247 command_runner.go:130] >       "spec": null,
	I0817 21:29:43.607578  102247 command_runner.go:130] >       "pinned": false
	I0817 21:29:43.607581  102247 command_runner.go:130] >     },
	I0817 21:29:43.607585  102247 command_runner.go:130] >     {
	I0817 21:29:43.607591  102247 command_runner.go:130] >       "id": "6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4",
	I0817 21:29:43.607597  102247 command_runner.go:130] >       "repoTags": [
	I0817 21:29:43.607602  102247 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.27.4"
	I0817 21:29:43.607605  102247 command_runner.go:130] >       ],
	I0817 21:29:43.607609  102247 command_runner.go:130] >       "repoDigests": [
	I0817 21:29:43.607616  102247 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf",
	I0817 21:29:43.607625  102247 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ce9abe867450f8962eb851670b5869219ca0c3376777d1e18d89f9abedbe10c3"
	I0817 21:29:43.607628  102247 command_runner.go:130] >       ],
	I0817 21:29:43.607632  102247 command_runner.go:130] >       "size": "72714135",
	I0817 21:29:43.607639  102247 command_runner.go:130] >       "uid": null,
	I0817 21:29:43.607642  102247 command_runner.go:130] >       "username": "",
	I0817 21:29:43.607646  102247 command_runner.go:130] >       "spec": null,
	I0817 21:29:43.607650  102247 command_runner.go:130] >       "pinned": false
	I0817 21:29:43.607656  102247 command_runner.go:130] >     },
	I0817 21:29:43.607659  102247 command_runner.go:130] >     {
	I0817 21:29:43.607665  102247 command_runner.go:130] >       "id": "98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16",
	I0817 21:29:43.607672  102247 command_runner.go:130] >       "repoTags": [
	I0817 21:29:43.607677  102247 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.27.4"
	I0817 21:29:43.607681  102247 command_runner.go:130] >       ],
	I0817 21:29:43.607685  102247 command_runner.go:130] >       "repoDigests": [
	I0817 21:29:43.607722  102247 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af",
	I0817 21:29:43.607732  102247 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:9c58009453cfcd7533721327269d2ef0af93d09f21812a5d584c375840117da7"
	I0817 21:29:43.607736  102247 command_runner.go:130] >       ],
	I0817 21:29:43.607740  102247 command_runner.go:130] >       "size": "59814710",
	I0817 21:29:43.607744  102247 command_runner.go:130] >       "uid": {
	I0817 21:29:43.607748  102247 command_runner.go:130] >         "value": "0"
	I0817 21:29:43.607754  102247 command_runner.go:130] >       },
	I0817 21:29:43.607758  102247 command_runner.go:130] >       "username": "",
	I0817 21:29:43.607766  102247 command_runner.go:130] >       "spec": null,
	I0817 21:29:43.607772  102247 command_runner.go:130] >       "pinned": false
	I0817 21:29:43.607780  102247 command_runner.go:130] >     },
	I0817 21:29:43.607786  102247 command_runner.go:130] >     {
	I0817 21:29:43.607796  102247 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0817 21:29:43.607800  102247 command_runner.go:130] >       "repoTags": [
	I0817 21:29:43.607807  102247 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0817 21:29:43.607811  102247 command_runner.go:130] >       ],
	I0817 21:29:43.607815  102247 command_runner.go:130] >       "repoDigests": [
	I0817 21:29:43.607822  102247 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0817 21:29:43.607831  102247 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0817 21:29:43.607834  102247 command_runner.go:130] >       ],
	I0817 21:29:43.607839  102247 command_runner.go:130] >       "size": "750414",
	I0817 21:29:43.607845  102247 command_runner.go:130] >       "uid": {
	I0817 21:29:43.607849  102247 command_runner.go:130] >         "value": "65535"
	I0817 21:29:43.607852  102247 command_runner.go:130] >       },
	I0817 21:29:43.607856  102247 command_runner.go:130] >       "username": "",
	I0817 21:29:43.607860  102247 command_runner.go:130] >       "spec": null,
	I0817 21:29:43.607864  102247 command_runner.go:130] >       "pinned": false
	I0817 21:29:43.607867  102247 command_runner.go:130] >     }
	I0817 21:29:43.607870  102247 command_runner.go:130] >   ]
	I0817 21:29:43.607874  102247 command_runner.go:130] > }
	I0817 21:29:43.609399  102247 crio.go:496] all images are preloaded for cri-o runtime.
	I0817 21:29:43.609416  102247 cache_images.go:84] Images are preloaded, skipping loading
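
The two identical image dumps above are what the preload check parses: every image the v1.27.4/cri-o combination needs is already tagged in the store, so the preload tarball is skipped. To eyeball the same data by hand (assuming jq is installed; not part of the test run):

    # list tag and size for each image in the runtime's store
    sudo crictl images --output json | jq -r '.images[] | "\(.repoTags[0])\t\(.size)"'
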
	I0817 21:29:43.609469  102247 ssh_runner.go:195] Run: crio config
	I0817 21:29:43.645866  102247 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0817 21:29:43.645911  102247 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0817 21:29:43.645924  102247 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0817 21:29:43.645930  102247 command_runner.go:130] > #
	I0817 21:29:43.645952  102247 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0817 21:29:43.645967  102247 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0817 21:29:43.645981  102247 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0817 21:29:43.646000  102247 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0817 21:29:43.646013  102247 command_runner.go:130] > # reload'.
	I0817 21:29:43.646024  102247 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0817 21:29:43.646034  102247 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0817 21:29:43.646045  102247 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0817 21:29:43.646054  102247 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0817 21:29:43.646064  102247 command_runner.go:130] > [crio]
	I0817 21:29:43.646075  102247 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0817 21:29:43.646086  102247 command_runner.go:130] > # containers images, in this directory.
	I0817 21:29:43.646097  102247 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0817 21:29:43.646112  102247 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0817 21:29:43.646121  102247 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0817 21:29:43.646131  102247 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0817 21:29:43.646144  102247 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0817 21:29:43.646152  102247 command_runner.go:130] > # storage_driver = "vfs"
	I0817 21:29:43.646161  102247 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0817 21:29:43.646170  102247 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0817 21:29:43.646177  102247 command_runner.go:130] > # storage_option = [
	I0817 21:29:43.646183  102247 command_runner.go:130] > # ]
	I0817 21:29:43.646194  102247 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0817 21:29:43.646208  102247 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0817 21:29:43.646217  102247 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0817 21:29:43.646230  102247 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0817 21:29:43.646243  102247 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0817 21:29:43.646255  102247 command_runner.go:130] > # always happen on a node reboot
	I0817 21:29:43.646263  102247 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0817 21:29:43.646277  102247 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0817 21:29:43.646290  102247 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0817 21:29:43.646307  102247 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0817 21:29:43.646319  102247 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0817 21:29:43.646334  102247 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0817 21:29:43.646350  102247 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0817 21:29:43.646359  102247 command_runner.go:130] > # internal_wipe = true
	I0817 21:29:43.646372  102247 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0817 21:29:43.646387  102247 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0817 21:29:43.646397  102247 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0817 21:29:43.646423  102247 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0817 21:29:43.646445  102247 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0817 21:29:43.646451  102247 command_runner.go:130] > [crio.api]
	I0817 21:29:43.646460  102247 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0817 21:29:43.646472  102247 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0817 21:29:43.646485  102247 command_runner.go:130] > # IP address on which the stream server will listen.
	I0817 21:29:43.646496  102247 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0817 21:29:43.646507  102247 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0817 21:29:43.646519  102247 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0817 21:29:43.646525  102247 command_runner.go:130] > # stream_port = "0"
	I0817 21:29:43.646540  102247 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0817 21:29:43.646551  102247 command_runner.go:130] > # stream_enable_tls = false
	I0817 21:29:43.646565  102247 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0817 21:29:43.646578  102247 command_runner.go:130] > # stream_idle_timeout = ""
	I0817 21:29:43.646592  102247 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0817 21:29:43.646607  102247 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0817 21:29:43.646613  102247 command_runner.go:130] > # minutes.
	I0817 21:29:43.646620  102247 command_runner.go:130] > # stream_tls_cert = ""
	I0817 21:29:43.646633  102247 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0817 21:29:43.646647  102247 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0817 21:29:43.646657  102247 command_runner.go:130] > # stream_tls_key = ""
	I0817 21:29:43.646667  102247 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0817 21:29:43.646681  102247 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0817 21:29:43.646693  102247 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0817 21:29:43.646734  102247 command_runner.go:130] > # stream_tls_ca = ""
	I0817 21:29:43.646751  102247 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0817 21:29:43.646762  102247 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0817 21:29:43.646775  102247 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0817 21:29:43.646786  102247 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0817 21:29:43.646806  102247 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0817 21:29:43.646820  102247 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0817 21:29:43.646830  102247 command_runner.go:130] > [crio.runtime]
	I0817 21:29:43.646840  102247 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0817 21:29:43.646852  102247 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0817 21:29:43.646862  102247 command_runner.go:130] > # "nofile=1024:2048"
	I0817 21:29:43.646873  102247 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0817 21:29:43.646882  102247 command_runner.go:130] > # default_ulimits = [
	I0817 21:29:43.646887  102247 command_runner.go:130] > # ]
	I0817 21:29:43.646896  102247 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0817 21:29:43.646906  102247 command_runner.go:130] > # no_pivot = false
	I0817 21:29:43.646915  102247 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0817 21:29:43.646926  102247 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0817 21:29:43.646946  102247 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0817 21:29:43.646955  102247 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0817 21:29:43.646967  102247 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0817 21:29:43.646981  102247 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0817 21:29:43.646991  102247 command_runner.go:130] > # conmon = ""
	I0817 21:29:43.646999  102247 command_runner.go:130] > # Cgroup setting for conmon
	I0817 21:29:43.647008  102247 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0817 21:29:43.647019  102247 command_runner.go:130] > conmon_cgroup = "pod"
	I0817 21:29:43.647030  102247 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0817 21:29:43.647039  102247 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0817 21:29:43.647051  102247 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0817 21:29:43.647060  102247 command_runner.go:130] > # conmon_env = [
	I0817 21:29:43.647065  102247 command_runner.go:130] > # ]
	I0817 21:29:43.647076  102247 command_runner.go:130] > # Additional environment variables to set for all the
	I0817 21:29:43.647088  102247 command_runner.go:130] > # containers. These are overridden if set in the
	I0817 21:29:43.647098  102247 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0817 21:29:43.647107  102247 command_runner.go:130] > # default_env = [
	I0817 21:29:43.647111  102247 command_runner.go:130] > # ]
	I0817 21:29:43.647117  102247 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0817 21:29:43.647123  102247 command_runner.go:130] > # selinux = false
	I0817 21:29:43.647129  102247 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0817 21:29:43.647137  102247 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0817 21:29:43.647142  102247 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0817 21:29:43.647149  102247 command_runner.go:130] > # seccomp_profile = ""
	I0817 21:29:43.647154  102247 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0817 21:29:43.647162  102247 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0817 21:29:43.647169  102247 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0817 21:29:43.647176  102247 command_runner.go:130] > # which might increase security.
	I0817 21:29:43.647180  102247 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0817 21:29:43.647186  102247 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0817 21:29:43.647194  102247 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0817 21:29:43.647202  102247 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0817 21:29:43.647209  102247 command_runner.go:130] > # the profile is set to "unconfined", then this is equivalent to disabling AppArmor.
	I0817 21:29:43.647216  102247 command_runner.go:130] > # This option supports live configuration reload.
	I0817 21:29:43.647220  102247 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0817 21:29:43.647231  102247 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0817 21:29:43.647238  102247 command_runner.go:130] > # the cgroup blockio controller.
	I0817 21:29:43.647242  102247 command_runner.go:130] > # blockio_config_file = ""
	I0817 21:29:43.647250  102247 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0817 21:29:43.647256  102247 command_runner.go:130] > # irqbalance daemon.
	I0817 21:29:43.647261  102247 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0817 21:29:43.647271  102247 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0817 21:29:43.647278  102247 command_runner.go:130] > # This option supports live configuration reload.
	I0817 21:29:43.647283  102247 command_runner.go:130] > # rdt_config_file = ""
	I0817 21:29:43.647291  102247 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0817 21:29:43.647298  102247 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0817 21:29:43.647303  102247 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0817 21:29:43.647310  102247 command_runner.go:130] > # separate_pull_cgroup = ""
	I0817 21:29:43.647316  102247 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0817 21:29:43.647324  102247 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0817 21:29:43.647330  102247 command_runner.go:130] > # will be added.
	I0817 21:29:43.647334  102247 command_runner.go:130] > # default_capabilities = [
	I0817 21:29:43.647340  102247 command_runner.go:130] > # 	"CHOWN",
	I0817 21:29:43.647344  102247 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0817 21:29:43.647348  102247 command_runner.go:130] > # 	"FSETID",
	I0817 21:29:43.647354  102247 command_runner.go:130] > # 	"FOWNER",
	I0817 21:29:43.647358  102247 command_runner.go:130] > # 	"SETGID",
	I0817 21:29:43.647364  102247 command_runner.go:130] > # 	"SETUID",
	I0817 21:29:43.647367  102247 command_runner.go:130] > # 	"SETPCAP",
	I0817 21:29:43.647374  102247 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0817 21:29:43.647377  102247 command_runner.go:130] > # 	"KILL",
	I0817 21:29:43.647382  102247 command_runner.go:130] > # ]
	I0817 21:29:43.647389  102247 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0817 21:29:43.647397  102247 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0817 21:29:43.647408  102247 command_runner.go:130] > # add_inheritable_capabilities = true
	I0817 21:29:43.647416  102247 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0817 21:29:43.647451  102247 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0817 21:29:43.647461  102247 command_runner.go:130] > # default_sysctls = [
	I0817 21:29:43.647466  102247 command_runner.go:130] > # ]
	I0817 21:29:43.647477  102247 command_runner.go:130] > # List of devices on the host that a
	I0817 21:29:43.647489  102247 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0817 21:29:43.647500  102247 command_runner.go:130] > # allowed_devices = [
	I0817 21:29:43.647509  102247 command_runner.go:130] > # 	"/dev/fuse",
	I0817 21:29:43.647515  102247 command_runner.go:130] > # ]
	I0817 21:29:43.647520  102247 command_runner.go:130] > # List of additional devices, specified as
	I0817 21:29:43.647560  102247 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0817 21:29:43.647571  102247 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0817 21:29:43.647577  102247 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0817 21:29:43.647586  102247 command_runner.go:130] > # additional_devices = [
	I0817 21:29:43.647593  102247 command_runner.go:130] > # ]
	I0817 21:29:43.647598  102247 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0817 21:29:43.647605  102247 command_runner.go:130] > # cdi_spec_dirs = [
	I0817 21:29:43.647608  102247 command_runner.go:130] > # 	"/etc/cdi",
	I0817 21:29:43.647615  102247 command_runner.go:130] > # 	"/var/run/cdi",
	I0817 21:29:43.647619  102247 command_runner.go:130] > # ]
	I0817 21:29:43.647627  102247 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0817 21:29:43.647633  102247 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0817 21:29:43.647639  102247 command_runner.go:130] > # Defaults to false.
	I0817 21:29:43.647645  102247 command_runner.go:130] > # device_ownership_from_security_context = false
	I0817 21:29:43.647653  102247 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0817 21:29:43.647661  102247 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0817 21:29:43.647665  102247 command_runner.go:130] > # hooks_dir = [
	I0817 21:29:43.647671  102247 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0817 21:29:43.647675  102247 command_runner.go:130] > # ]
	I0817 21:29:43.647683  102247 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0817 21:29:43.647689  102247 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0817 21:29:43.647697  102247 command_runner.go:130] > # its default mounts from the following two files:
	I0817 21:29:43.647703  102247 command_runner.go:130] > #
	I0817 21:29:43.647709  102247 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0817 21:29:43.647717  102247 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0817 21:29:43.647724  102247 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0817 21:29:43.647727  102247 command_runner.go:130] > #
	I0817 21:29:43.647735  102247 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0817 21:29:43.647744  102247 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0817 21:29:43.647750  102247 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0817 21:29:43.647757  102247 command_runner.go:130] > #      only add mounts it finds in this file.
	I0817 21:29:43.647760  102247 command_runner.go:130] > #
	I0817 21:29:43.647766  102247 command_runner.go:130] > # default_mounts_file = ""
	I0817 21:29:43.647771  102247 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0817 21:29:43.647780  102247 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0817 21:29:43.647786  102247 command_runner.go:130] > # pids_limit = 0
	I0817 21:29:43.647794  102247 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0817 21:29:43.647802  102247 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0817 21:29:43.647810  102247 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0817 21:29:43.647820  102247 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0817 21:29:43.647827  102247 command_runner.go:130] > # log_size_max = -1
	I0817 21:29:43.647833  102247 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0817 21:29:43.647840  102247 command_runner.go:130] > # log_to_journald = false
	I0817 21:29:43.647848  102247 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0817 21:29:43.647855  102247 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0817 21:29:43.647860  102247 command_runner.go:130] > # Path to directory for container attach sockets.
	I0817 21:29:43.647867  102247 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0817 21:29:43.647872  102247 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0817 21:29:43.647878  102247 command_runner.go:130] > # bind_mount_prefix = ""
	I0817 21:29:43.647884  102247 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0817 21:29:43.647890  102247 command_runner.go:130] > # read_only = false
	I0817 21:29:43.647896  102247 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0817 21:29:43.647904  102247 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0817 21:29:43.647910  102247 command_runner.go:130] > # live configuration reload.
	I0817 21:29:43.647914  102247 command_runner.go:130] > # log_level = "info"
	I0817 21:29:43.647919  102247 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0817 21:29:43.647927  102247 command_runner.go:130] > # This option supports live configuration reload.
	I0817 21:29:43.647930  102247 command_runner.go:130] > # log_filter = ""
	I0817 21:29:43.647938  102247 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0817 21:29:43.647947  102247 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0817 21:29:43.647954  102247 command_runner.go:130] > # separated by comma.
	I0817 21:29:43.647958  102247 command_runner.go:130] > # uid_mappings = ""
	I0817 21:29:43.647966  102247 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0817 21:29:43.647974  102247 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0817 21:29:43.647979  102247 command_runner.go:130] > # separated by comma.
	I0817 21:29:43.647983  102247 command_runner.go:130] > # gid_mappings = ""
	I0817 21:29:43.647990  102247 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0817 21:29:43.647999  102247 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0817 21:29:43.648004  102247 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0817 21:29:43.648011  102247 command_runner.go:130] > # minimum_mappable_uid = -1
	I0817 21:29:43.648016  102247 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0817 21:29:43.648025  102247 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0817 21:29:43.648033  102247 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0817 21:29:43.648039  102247 command_runner.go:130] > # minimum_mappable_gid = -1
	I0817 21:29:43.648069  102247 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0817 21:29:43.648082  102247 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0817 21:29:43.648095  102247 command_runner.go:130] > # value is 30s; lower values are not considered by CRI-O.
	I0817 21:29:43.648105  102247 command_runner.go:130] > # ctr_stop_timeout = 30
	I0817 21:29:43.648118  102247 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0817 21:29:43.648128  102247 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0817 21:29:43.648139  102247 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0817 21:29:43.648146  102247 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0817 21:29:43.648150  102247 command_runner.go:130] > # drop_infra_ctr = true
	I0817 21:29:43.648159  102247 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0817 21:29:43.648165  102247 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0817 21:29:43.648174  102247 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0817 21:29:43.648180  102247 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0817 21:29:43.648185  102247 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0817 21:29:43.648192  102247 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0817 21:29:43.648197  102247 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0817 21:29:43.648207  102247 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0817 21:29:43.648214  102247 command_runner.go:130] > # pinns_path = ""
	I0817 21:29:43.648220  102247 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0817 21:29:43.648228  102247 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0817 21:29:43.648236  102247 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0817 21:29:43.648241  102247 command_runner.go:130] > # default_runtime = "runc"
	I0817 21:29:43.648246  102247 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0817 21:29:43.648255  102247 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0817 21:29:43.648266  102247 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0817 21:29:43.648273  102247 command_runner.go:130] > # creation as a file is not desired either.
	I0817 21:29:43.648280  102247 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0817 21:29:43.648287  102247 command_runner.go:130] > # the hostname is being managed dynamically.
	I0817 21:29:43.648292  102247 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0817 21:29:43.648297  102247 command_runner.go:130] > # ]
	I0817 21:29:43.648303  102247 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0817 21:29:43.648311  102247 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0817 21:29:43.648319  102247 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0817 21:29:43.648325  102247 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0817 21:29:43.648331  102247 command_runner.go:130] > #
	I0817 21:29:43.648335  102247 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0817 21:29:43.648342  102247 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0817 21:29:43.648346  102247 command_runner.go:130] > #  runtime_type = "oci"
	I0817 21:29:43.648353  102247 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0817 21:29:43.648360  102247 command_runner.go:130] > #  privileged_without_host_devices = false
	I0817 21:29:43.648365  102247 command_runner.go:130] > #  allowed_annotations = []
	I0817 21:29:43.648370  102247 command_runner.go:130] > # Where:
	I0817 21:29:43.648375  102247 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0817 21:29:43.648386  102247 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0817 21:29:43.648394  102247 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0817 21:29:43.648403  102247 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0817 21:29:43.648412  102247 command_runner.go:130] > #   in $PATH.
	I0817 21:29:43.648418  102247 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0817 21:29:43.648425  102247 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0817 21:29:43.648434  102247 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0817 21:29:43.648440  102247 command_runner.go:130] > #   state.
	I0817 21:29:43.648446  102247 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0817 21:29:43.648454  102247 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0817 21:29:43.648462  102247 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0817 21:29:43.648469  102247 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0817 21:29:43.648476  102247 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0817 21:29:43.648484  102247 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0817 21:29:43.648491  102247 command_runner.go:130] > #   The currently recognized values are:
	I0817 21:29:43.648497  102247 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0817 21:29:43.648505  102247 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0817 21:29:43.648513  102247 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0817 21:29:43.648521  102247 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0817 21:29:43.648528  102247 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0817 21:29:43.648536  102247 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0817 21:29:43.648544  102247 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0817 21:29:43.648553  102247 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0817 21:29:43.648579  102247 command_runner.go:130] > #   should be moved to the container's cgroup
	I0817 21:29:43.648589  102247 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0817 21:29:43.648600  102247 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0817 21:29:43.648610  102247 command_runner.go:130] > runtime_type = "oci"
	I0817 21:29:43.648620  102247 command_runner.go:130] > runtime_root = "/run/runc"
	I0817 21:29:43.648628  102247 command_runner.go:130] > runtime_config_path = ""
	I0817 21:29:43.648635  102247 command_runner.go:130] > monitor_path = ""
	I0817 21:29:43.648639  102247 command_runner.go:130] > monitor_cgroup = ""
	I0817 21:29:43.648646  102247 command_runner.go:130] > monitor_exec_cgroup = ""
	I0817 21:29:43.648670  102247 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0817 21:29:43.648676  102247 command_runner.go:130] > # running containers
	I0817 21:29:43.648681  102247 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0817 21:29:43.648689  102247 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0817 21:29:43.648699  102247 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0817 21:29:43.648706  102247 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0817 21:29:43.648712  102247 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0817 21:29:43.648719  102247 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0817 21:29:43.648723  102247 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0817 21:29:43.648731  102247 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0817 21:29:43.648736  102247 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0817 21:29:43.648742  102247 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0817 21:29:43.648748  102247 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0817 21:29:43.648755  102247 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0817 21:29:43.648764  102247 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0817 21:29:43.648774  102247 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0817 21:29:43.648783  102247 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0817 21:29:43.648792  102247 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0817 21:29:43.648802  102247 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0817 21:29:43.648812  102247 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0817 21:29:43.648819  102247 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0817 21:29:43.648829  102247 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0817 21:29:43.648835  102247 command_runner.go:130] > # Example:
	I0817 21:29:43.648839  102247 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0817 21:29:43.648846  102247 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0817 21:29:43.648851  102247 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0817 21:29:43.648858  102247 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0817 21:29:43.648862  102247 command_runner.go:130] > # cpuset = "0-1"
	I0817 21:29:43.648866  102247 command_runner.go:130] > # cpushares = 0
	I0817 21:29:43.648870  102247 command_runner.go:130] > # Where:
	I0817 21:29:43.648875  102247 command_runner.go:130] > # The workload name is workload-type.
	I0817 21:29:43.648884  102247 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0817 21:29:43.648893  102247 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0817 21:29:43.648898  102247 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0817 21:29:43.648909  102247 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0817 21:29:43.648917  102247 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0817 21:29:43.648922  102247 command_runner.go:130] > # 
	I0817 21:29:43.648929  102247 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0817 21:29:43.648934  102247 command_runner.go:130] > #
	I0817 21:29:43.648942  102247 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0817 21:29:43.648950  102247 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0817 21:29:43.648958  102247 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0817 21:29:43.648967  102247 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0817 21:29:43.648974  102247 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0817 21:29:43.648978  102247 command_runner.go:130] > [crio.image]
	I0817 21:29:43.648984  102247 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0817 21:29:43.648990  102247 command_runner.go:130] > # default_transport = "docker://"
	I0817 21:29:43.648996  102247 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0817 21:29:43.649004  102247 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0817 21:29:43.649009  102247 command_runner.go:130] > # global_auth_file = ""
	I0817 21:29:43.649014  102247 command_runner.go:130] > # The image used to instantiate infra containers.
	I0817 21:29:43.649021  102247 command_runner.go:130] > # This option supports live configuration reload.
	I0817 21:29:43.649026  102247 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0817 21:29:43.649034  102247 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0817 21:29:43.649042  102247 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0817 21:29:43.649049  102247 command_runner.go:130] > # This option supports live configuration reload.
	I0817 21:29:43.649056  102247 command_runner.go:130] > # pause_image_auth_file = ""
	I0817 21:29:43.649061  102247 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0817 21:29:43.649069  102247 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0817 21:29:43.649078  102247 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0817 21:29:43.649086  102247 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0817 21:29:43.649092  102247 command_runner.go:130] > # pause_command = "/pause"
	I0817 21:29:43.649097  102247 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0817 21:29:43.649126  102247 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0817 21:29:43.649140  102247 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0817 21:29:43.649153  102247 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0817 21:29:43.649164  102247 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0817 21:29:43.649175  102247 command_runner.go:130] > # signature_policy = ""
	I0817 21:29:43.649185  102247 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0817 21:29:43.649193  102247 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0817 21:29:43.649200  102247 command_runner.go:130] > # changing them here.
	I0817 21:29:43.649205  102247 command_runner.go:130] > # insecure_registries = [
	I0817 21:29:43.649210  102247 command_runner.go:130] > # ]
	I0817 21:29:43.649217  102247 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0817 21:29:43.649224  102247 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0817 21:29:43.649251  102247 command_runner.go:130] > # image_volumes = "mkdir"
	I0817 21:29:43.649262  102247 command_runner.go:130] > # Temporary directory to use for storing big files
	I0817 21:29:43.649267  102247 command_runner.go:130] > # big_files_temporary_dir = ""
	I0817 21:29:43.649276  102247 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0817 21:29:43.649282  102247 command_runner.go:130] > # CNI plugins.
	I0817 21:29:43.649286  102247 command_runner.go:130] > [crio.network]
	I0817 21:29:43.649294  102247 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0817 21:29:43.649301  102247 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0817 21:29:43.649306  102247 command_runner.go:130] > # cni_default_network = ""
	I0817 21:29:43.649313  102247 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0817 21:29:43.649320  102247 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0817 21:29:43.649326  102247 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0817 21:29:43.649332  102247 command_runner.go:130] > # plugin_dirs = [
	I0817 21:29:43.649336  102247 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0817 21:29:43.649342  102247 command_runner.go:130] > # ]
	I0817 21:29:43.649348  102247 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0817 21:29:43.649354  102247 command_runner.go:130] > [crio.metrics]
	I0817 21:29:43.649359  102247 command_runner.go:130] > # Globally enable or disable metrics support.
	I0817 21:29:43.649365  102247 command_runner.go:130] > # enable_metrics = false
	I0817 21:29:43.649370  102247 command_runner.go:130] > # Specify enabled metrics collectors.
	I0817 21:29:43.649377  102247 command_runner.go:130] > # Per default all metrics are enabled.
	I0817 21:29:43.649383  102247 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0817 21:29:43.649391  102247 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0817 21:29:43.649398  102247 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0817 21:29:43.649408  102247 command_runner.go:130] > # metrics_collectors = [
	I0817 21:29:43.649414  102247 command_runner.go:130] > # 	"operations",
	I0817 21:29:43.649419  102247 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0817 21:29:43.649426  102247 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0817 21:29:43.649430  102247 command_runner.go:130] > # 	"operations_errors",
	I0817 21:29:43.649437  102247 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0817 21:29:43.649441  102247 command_runner.go:130] > # 	"image_pulls_by_name",
	I0817 21:29:43.649448  102247 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0817 21:29:43.649455  102247 command_runner.go:130] > # 	"image_pulls_failures",
	I0817 21:29:43.649462  102247 command_runner.go:130] > # 	"image_pulls_successes",
	I0817 21:29:43.649466  102247 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0817 21:29:43.649473  102247 command_runner.go:130] > # 	"image_layer_reuse",
	I0817 21:29:43.649477  102247 command_runner.go:130] > # 	"containers_oom_total",
	I0817 21:29:43.649483  102247 command_runner.go:130] > # 	"containers_oom",
	I0817 21:29:43.649487  102247 command_runner.go:130] > # 	"processes_defunct",
	I0817 21:29:43.649493  102247 command_runner.go:130] > # 	"operations_total",
	I0817 21:29:43.649497  102247 command_runner.go:130] > # 	"operations_latency_seconds",
	I0817 21:29:43.649504  102247 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0817 21:29:43.649508  102247 command_runner.go:130] > # 	"operations_errors_total",
	I0817 21:29:43.649515  102247 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0817 21:29:43.649519  102247 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0817 21:29:43.649528  102247 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0817 21:29:43.649535  102247 command_runner.go:130] > # 	"image_pulls_success_total",
	I0817 21:29:43.649539  102247 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0817 21:29:43.649545  102247 command_runner.go:130] > # 	"containers_oom_count_total",
	I0817 21:29:43.649548  102247 command_runner.go:130] > # ]
	I0817 21:29:43.649556  102247 command_runner.go:130] > # The port on which the metrics server will listen.
	I0817 21:29:43.649563  102247 command_runner.go:130] > # metrics_port = 9090
	I0817 21:29:43.649568  102247 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0817 21:29:43.649574  102247 command_runner.go:130] > # metrics_socket = ""
	I0817 21:29:43.649579  102247 command_runner.go:130] > # The certificate for the secure metrics server.
	I0817 21:29:43.649587  102247 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0817 21:29:43.649594  102247 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0817 21:29:43.649602  102247 command_runner.go:130] > # certificate on any modification event.
	I0817 21:29:43.649606  102247 command_runner.go:130] > # metrics_cert = ""
	I0817 21:29:43.649614  102247 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0817 21:29:43.649618  102247 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0817 21:29:43.649624  102247 command_runner.go:130] > # metrics_key = ""
	I0817 21:29:43.649630  102247 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0817 21:29:43.649636  102247 command_runner.go:130] > [crio.tracing]
	I0817 21:29:43.649641  102247 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0817 21:29:43.649647  102247 command_runner.go:130] > # enable_tracing = false
	I0817 21:29:43.649658  102247 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0817 21:29:43.649664  102247 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0817 21:29:43.649672  102247 command_runner.go:130] > # Number of samples to collect per million spans.
	I0817 21:29:43.649679  102247 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0817 21:29:43.649685  102247 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0817 21:29:43.649691  102247 command_runner.go:130] > [crio.stats]
	I0817 21:29:43.649696  102247 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0817 21:29:43.649704  102247 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0817 21:29:43.649711  102247 command_runner.go:130] > # stats_collection_period = 0
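The dump above is CRI-O's effective crio.conf; relative to the commented defaults, the minikube-specific overrides visible here are conmon_cgroup = "pod", cgroup_manager = "cgroupfs", the [crio.runtime.runtimes.runc] table, and pause_image. As a rough illustration (not minikube code), a small Go check like the following could parse such a file and confirm that cgroup_manager agrees with the kubelet's cgroupDriver further below; the github.com/BurntSushi/toml parser and the hard-coded path are assumptions of the sketch.

	package main

	import (
		"fmt"
		"log"

		"github.com/BurntSushi/toml"
	)

	// Only the keys this sketch cares about; everything else in crio.conf is ignored.
	type crioConf struct {
		Crio struct {
			Runtime struct {
				CgroupManager string `toml:"cgroup_manager"`
			} `toml:"runtime"`
		} `toml:"crio"`
	}

	func main() {
		var cfg crioConf
		if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
			log.Fatal(err)
		}
		// Both CRI-O and the kubelet are configured for cgroupfs in this run.
		if cfg.Crio.Runtime.CgroupManager != "cgroupfs" {
			log.Fatalf("cgroup_manager = %q, kubelet expects cgroupfs", cfg.Crio.Runtime.CgroupManager)
		}
		fmt.Println("cgroup driver matches")
	}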
	I0817 21:29:43.651294  102247 command_runner.go:130] ! time="2023-08-17 21:29:43.643117720Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0817 21:29:43.651325  102247 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0817 21:29:43.651396  102247 cni.go:84] Creating CNI manager for ""
	I0817 21:29:43.651414  102247 cni.go:136] 1 nodes found, recommending kindnet
	I0817 21:29:43.651433  102247 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 21:29:43.651455  102247 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-938028 NodeName:multinode-938028 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0817 21:29:43.651570  102247 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-938028"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
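The three kubeadm documents above are rendered from the options struct before being written to the node as kubeadm.yaml.new further down. A minimal stand-in sketch of that rendering step, assuming a hand-rolled text/template rather than minikube's actual template, covering only the InitConfiguration:

	package main

	import (
		"os"
		"text/template"
	)

	// Not minikube's template; a minimal stand-in showing how the
	// InitConfiguration document above can be produced from the kubeadm options.
	var initCfg = template.Must(template.New("init").Parse(`apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: unix://{{.CRISocket}}
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.NodeIP}}
	  taints: []
	`))

	func main() {
		opts := struct {
			AdvertiseAddress, CRISocket, NodeName, NodeIP string
			APIServerPort                                 int
		}{"192.168.58.2", "/var/run/crio/crio.sock", "multinode-938028", "192.168.58.2", 8443}
		if err := initCfg.Execute(os.Stdout, opts); err != nil {
			panic(err)
		}
	}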
	
	I0817 21:29:43.651626  102247 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-938028 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:multinode-938028 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
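The ExecStart line above carries the kubelet flags in sorted order. A small illustrative helper (not minikube's code) that renders such a flag map deterministically:

	package main

	import (
		"fmt"
		"sort"
		"strings"
	)

	// kubeletFlags renders a map of kubelet options as the sorted
	// "--key=value" string seen in the ExecStart line above.
	func kubeletFlags(opts map[string]string) string {
		keys := make([]string, 0, len(opts))
		for k := range opts {
			keys = append(keys, k)
		}
		sort.Strings(keys)
		parts := make([]string, 0, len(keys))
		for _, k := range keys {
			parts = append(parts, fmt.Sprintf("--%s=%s", k, opts[k]))
		}
		return strings.Join(parts, " ")
	}

	func main() {
		fmt.Println(kubeletFlags(map[string]string{
			"container-runtime-endpoint": "unix:///var/run/crio/crio.sock",
			"hostname-override":          "multinode-938028",
			"node-ip":                    "192.168.58.2",
		}))
	}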
	I0817 21:29:43.651668  102247 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0817 21:29:43.658877  102247 command_runner.go:130] > kubeadm
	I0817 21:29:43.658893  102247 command_runner.go:130] > kubectl
	I0817 21:29:43.658899  102247 command_runner.go:130] > kubelet
	I0817 21:29:43.659550  102247 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 21:29:43.659615  102247 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0817 21:29:43.666935  102247 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I0817 21:29:43.681503  102247 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 21:29:43.696358  102247 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I0817 21:29:43.710801  102247 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0817 21:29:43.713586  102247 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
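The bash pipeline above makes the control-plane.minikube.internal entry idempotent: filter out any stale line for the name, append the current IP, and copy the result back over /etc/hosts. The same filter-and-append shape as a Go sketch (the path handling and file mode are assumptions):

	package main

	import (
		"os"
		"strings"
	)

	// ensureHostsEntry mirrors the shell pipeline above: drop any stale line for
	// the host name, then append "IP<TAB>host".
	func ensureHostsEntry(path, ip, host string) error {
		b, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(b), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+host) {
				continue // stale entry for this name
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "192.168.58.2", "control-plane.minikube.internal"); err != nil {
			panic(err)
		}
	}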
	I0817 21:29:43.722654  102247 certs.go:56] Setting up /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/multinode-938028 for IP: 192.168.58.2
	I0817 21:29:43.722679  102247 certs.go:190] acquiring lock for shared ca certs: {Name:mkccb042866dbfd72de305663f91f6bc6da7b7e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:29:43.722795  102247 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16865-10716/.minikube/ca.key
	I0817 21:29:43.722832  102247 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16865-10716/.minikube/proxy-client-ca.key
	I0817 21:29:43.722869  102247 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/multinode-938028/client.key
	I0817 21:29:43.722880  102247 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/multinode-938028/client.crt with IP's: []
	I0817 21:29:43.871517  102247 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/multinode-938028/client.crt ...
	I0817 21:29:43.871550  102247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/multinode-938028/client.crt: {Name:mkfc938d4243077d4aaa3e9cf41ac962846d21e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:29:43.871734  102247 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/multinode-938028/client.key ...
	I0817 21:29:43.871752  102247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/multinode-938028/client.key: {Name:mk1f8b2c9d15587f0cc34613ef312b919f12aa29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:29:43.871847  102247 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/multinode-938028/apiserver.key.cee25041
	I0817 21:29:43.871866  102247 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/multinode-938028/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0817 21:29:44.014940  102247 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/multinode-938028/apiserver.crt.cee25041 ...
	I0817 21:29:44.014972  102247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/multinode-938028/apiserver.crt.cee25041: {Name:mk17d6c02f720c850042285bb8308c476ad69854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:29:44.015142  102247 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/multinode-938028/apiserver.key.cee25041 ...
	I0817 21:29:44.015160  102247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/multinode-938028/apiserver.key.cee25041: {Name:mk48cb40e6e967a5f39baa8151b8290fe8329582 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:29:44.015251  102247 certs.go:337] copying /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/multinode-938028/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/multinode-938028/apiserver.crt
	I0817 21:29:44.015338  102247 certs.go:341] copying /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/multinode-938028/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/multinode-938028/apiserver.key
	I0817 21:29:44.015402  102247 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/multinode-938028/proxy-client.key
	I0817 21:29:44.015422  102247 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/multinode-938028/proxy-client.crt with IP's: []
	I0817 21:29:44.244009  102247 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/multinode-938028/proxy-client.crt ...
	I0817 21:29:44.244039  102247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/multinode-938028/proxy-client.crt: {Name:mkf7c8ff4751aa6b263527001de1ed44f8a923a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:29:44.244214  102247 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/multinode-938028/proxy-client.key ...
	I0817 21:29:44.244232  102247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/multinode-938028/proxy-client.key: {Name:mk79acbc749e6ef9e5c79b03b2684b5132475d10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
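Each leaf certificate above is generated locally and signed by the minikube CA; the apiserver cert carries the IP SANs [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1] logged earlier. A condensed sketch of that signing step with crypto/x509 (key size, validity, and subject are assumptions, not minikube's exact values):

	package certs

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// signAPIServerCert sketches the CA-signed leaf generation above: a fresh
	// key plus an x509 template carrying the four IP SANs from the log.
	func signAPIServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) (der []byte, key *rsa.PrivateKey, err error) {
		key, err = rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("192.168.58.2"),
				net.ParseIP("10.96.0.1"),
				net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"),
			},
		}
		der, err = x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		return der, key, err
	}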
	I0817 21:29:44.244320  102247 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/multinode-938028/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0817 21:29:44.244393  102247 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/multinode-938028/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0817 21:29:44.244415  102247 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/multinode-938028/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0817 21:29:44.244433  102247 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/multinode-938028/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0817 21:29:44.244449  102247 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0817 21:29:44.244462  102247 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0817 21:29:44.244479  102247 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0817 21:29:44.244498  102247 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0817 21:29:44.244566  102247 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/home/jenkins/minikube-integration/16865-10716/.minikube/certs/17504.pem (1338 bytes)
	W0817 21:29:44.244611  102247 certs.go:433] ignoring /home/jenkins/minikube-integration/16865-10716/.minikube/certs/home/jenkins/minikube-integration/16865-10716/.minikube/certs/17504_empty.pem, impossibly tiny 0 bytes
	I0817 21:29:44.244628  102247 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 21:29:44.244662  102247 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca.pem (1078 bytes)
	I0817 21:29:44.244692  102247 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/home/jenkins/minikube-integration/16865-10716/.minikube/certs/cert.pem (1123 bytes)
	I0817 21:29:44.244729  102247 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/home/jenkins/minikube-integration/16865-10716/.minikube/certs/key.pem (1679 bytes)
	I0817 21:29:44.244785  102247 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-10716/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16865-10716/.minikube/files/etc/ssl/certs/175042.pem (1708 bytes)
	I0817 21:29:44.244821  102247 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/files/etc/ssl/certs/175042.pem -> /usr/share/ca-certificates/175042.pem
	I0817 21:29:44.244844  102247 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:29:44.244861  102247 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/17504.pem -> /usr/share/ca-certificates/17504.pem
	I0817 21:29:44.245374  102247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/multinode-938028/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0817 21:29:44.267531  102247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/multinode-938028/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0817 21:29:44.287629  102247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/multinode-938028/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0817 21:29:44.307319  102247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/multinode-938028/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0817 21:29:44.326546  102247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 21:29:44.346063  102247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0817 21:29:44.365646  102247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 21:29:44.386619  102247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0817 21:29:44.406099  102247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/files/etc/ssl/certs/175042.pem --> /usr/share/ca-certificates/175042.pem (1708 bytes)
	I0817 21:29:44.425697  102247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 21:29:44.444923  102247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/certs/17504.pem --> /usr/share/ca-certificates/17504.pem (1338 bytes)
	I0817 21:29:44.464423  102247 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
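The "scp memory -->" lines stream in-memory bytes straight to the node rather than copying files from local disk. A sketch of that idea over an already-established golang.org/x/crypto/ssh client (the sudo tee transport and destination path are assumptions about the mechanism, not minikube's implementation):

	package assets

	import (
		"bytes"

		"golang.org/x/crypto/ssh"
	)

	// writeRemote streams data over an existing SSH session into a root-owned
	// path, the way the "scp memory" steps above deliver generated files.
	func writeRemote(client *ssh.Client, data []byte, dst string) error {
		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()
		sess.Stdin = bytes.NewReader(data)
		return sess.Run("sudo tee " + dst + " >/dev/null")
	}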
	I0817 21:29:44.478630  102247 ssh_runner.go:195] Run: openssl version
	I0817 21:29:44.482990  102247 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0817 21:29:44.483137  102247 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17504.pem && ln -fs /usr/share/ca-certificates/17504.pem /etc/ssl/certs/17504.pem"
	I0817 21:29:44.490918  102247 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17504.pem
	I0817 21:29:44.493843  102247 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 17 21:16 /usr/share/ca-certificates/17504.pem
	I0817 21:29:44.493865  102247 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 17 21:16 /usr/share/ca-certificates/17504.pem
	I0817 21:29:44.493891  102247 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17504.pem
	I0817 21:29:44.499727  102247 command_runner.go:130] > 51391683
	I0817 21:29:44.499780  102247 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17504.pem /etc/ssl/certs/51391683.0"
	I0817 21:29:44.507524  102247 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/175042.pem && ln -fs /usr/share/ca-certificates/175042.pem /etc/ssl/certs/175042.pem"
	I0817 21:29:44.515301  102247 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/175042.pem
	I0817 21:29:44.518188  102247 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 17 21:16 /usr/share/ca-certificates/175042.pem
	I0817 21:29:44.518244  102247 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 17 21:16 /usr/share/ca-certificates/175042.pem
	I0817 21:29:44.518281  102247 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/175042.pem
	I0817 21:29:44.523918  102247 command_runner.go:130] > 3ec20f2e
	I0817 21:29:44.524110  102247 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/175042.pem /etc/ssl/certs/3ec20f2e.0"
	I0817 21:29:44.531765  102247 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 21:29:44.539413  102247 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:29:44.542151  102247 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 17 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:29:44.542175  102247 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 17 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:29:44.542208  102247 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:29:44.547757  102247 command_runner.go:130] > b5213941
	I0817 21:29:44.547935  102247 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
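Note on the cert steps above: each PEM is copied into /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under its OpenSSL subject hash, which is how the guest's TLS stack locates trusted CAs. A minimal sketch of the same mechanism for one cert, using the paths staged above:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # yields b5213941.0 in this run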
	I0817 21:29:44.555615  102247 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0817 21:29:44.558333  102247 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0817 21:29:44.558387  102247 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0817 21:29:44.558429  102247 kubeadm.go:404] StartCluster: {Name:multinode-938028 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-938028 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:29:44.558516  102247 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0817 21:29:44.558571  102247 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0817 21:29:44.589311  102247 cri.go:89] found id: ""
	I0817 21:29:44.589390  102247 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0817 21:29:44.596476  102247 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0817 21:29:44.596508  102247 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0817 21:29:44.596518  102247 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0817 21:29:44.597121  102247 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0817 21:29:44.604416  102247 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0817 21:29:44.604459  102247 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0817 21:29:44.611248  102247 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0817 21:29:44.611269  102247 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0817 21:29:44.611276  102247 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0817 21:29:44.611287  102247 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 21:29:44.611952  102247 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0817 21:29:44.611992  102247 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0817 21:29:44.653938  102247 kubeadm.go:322] [init] Using Kubernetes version: v1.27.4
	I0817 21:29:44.653969  102247 command_runner.go:130] > [init] Using Kubernetes version: v1.27.4
	I0817 21:29:44.654024  102247 kubeadm.go:322] [preflight] Running pre-flight checks
	I0817 21:29:44.654036  102247 command_runner.go:130] > [preflight] Running pre-flight checks
	I0817 21:29:44.686521  102247 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0817 21:29:44.686554  102247 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0817 21:29:44.686624  102247 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1039-gcp
	I0817 21:29:44.686641  102247 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1039-gcp
	I0817 21:29:44.686689  102247 kubeadm.go:322] OS: Linux
	I0817 21:29:44.686700  102247 command_runner.go:130] > OS: Linux
	I0817 21:29:44.686798  102247 kubeadm.go:322] CGROUPS_CPU: enabled
	I0817 21:29:44.686815  102247 command_runner.go:130] > CGROUPS_CPU: enabled
	I0817 21:29:44.686900  102247 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0817 21:29:44.686918  102247 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0817 21:29:44.686995  102247 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0817 21:29:44.687005  102247 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0817 21:29:44.687074  102247 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0817 21:29:44.687086  102247 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0817 21:29:44.687151  102247 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0817 21:29:44.687161  102247 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0817 21:29:44.687237  102247 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0817 21:29:44.687262  102247 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0817 21:29:44.687326  102247 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0817 21:29:44.687334  102247 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0817 21:29:44.687398  102247 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0817 21:29:44.687415  102247 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0817 21:29:44.687487  102247 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0817 21:29:44.687502  102247 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0817 21:29:44.746946  102247 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0817 21:29:44.746974  102247 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0817 21:29:44.747082  102247 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0817 21:29:44.747104  102247 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0817 21:29:44.747179  102247 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0817 21:29:44.747186  102247 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0817 21:29:44.934018  102247 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0817 21:29:44.934037  102247 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0817 21:29:44.935900  102247 out.go:204]   - Generating certificates and keys ...
	I0817 21:29:44.935988  102247 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0817 21:29:44.936023  102247 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0817 21:29:44.936130  102247 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0817 21:29:44.936141  102247 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0817 21:29:45.060022  102247 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0817 21:29:45.060054  102247 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0817 21:29:45.225853  102247 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0817 21:29:45.225882  102247 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0817 21:29:45.303121  102247 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0817 21:29:45.303157  102247 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0817 21:29:45.520176  102247 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0817 21:29:45.520204  102247 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0817 21:29:45.596385  102247 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0817 21:29:45.596429  102247 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0817 21:29:45.596582  102247 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-938028] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0817 21:29:45.596604  102247 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-938028] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0817 21:29:45.750095  102247 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0817 21:29:45.750135  102247 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0817 21:29:45.750315  102247 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-938028] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0817 21:29:45.750329  102247 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-938028] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0817 21:29:45.822573  102247 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0817 21:29:45.822600  102247 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0817 21:29:46.071627  102247 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0817 21:29:46.071673  102247 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0817 21:29:46.158317  102247 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0817 21:29:46.158343  102247 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0817 21:29:46.158481  102247 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0817 21:29:46.158494  102247 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0817 21:29:46.266726  102247 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0817 21:29:46.266782  102247 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0817 21:29:46.595396  102247 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0817 21:29:46.595421  102247 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0817 21:29:46.862688  102247 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0817 21:29:46.862712  102247 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0817 21:29:47.047727  102247 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0817 21:29:47.047757  102247 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0817 21:29:47.055449  102247 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0817 21:29:47.055471  102247 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0817 21:29:47.056228  102247 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0817 21:29:47.056248  102247 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0817 21:29:47.056289  102247 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0817 21:29:47.056316  102247 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0817 21:29:47.129188  102247 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0817 21:29:47.131857  102247 out.go:204]   - Booting up control plane ...
	I0817 21:29:47.129290  102247 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0817 21:29:47.131980  102247 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0817 21:29:47.131993  102247 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0817 21:29:47.132507  102247 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0817 21:29:47.132531  102247 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0817 21:29:47.133442  102247 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0817 21:29:47.133456  102247 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0817 21:29:47.134172  102247 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0817 21:29:47.134193  102247 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0817 21:29:47.136045  102247 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0817 21:29:47.136076  102247 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
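At this point kubeadm has written the four static Pod manifests and is waiting for the kubelet to start them. A way to observe the same state from inside the node (a sketch, not something this test runs; port 10248 is the kubelet's default healthz port):

    ls /etc/kubernetes/manifests             # etcd.yaml kube-apiserver.yaml kube-controller-manager.yaml kube-scheduler.yaml
    curl -s http://127.0.0.1:10248/healthz   # kubelet liveness, prints "ok"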
	I0817 21:29:52.638476  102247 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.502400 seconds
	I0817 21:29:52.638505  102247 command_runner.go:130] > [apiclient] All control plane components are healthy after 5.502400 seconds
	I0817 21:29:52.638667  102247 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0817 21:29:52.638696  102247 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0817 21:29:52.649828  102247 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0817 21:29:52.649878  102247 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0817 21:29:53.170548  102247 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0817 21:29:53.170572  102247 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0817 21:29:53.170708  102247 kubeadm.go:322] [mark-control-plane] Marking the node multinode-938028 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0817 21:29:53.170740  102247 command_runner.go:130] > [mark-control-plane] Marking the node multinode-938028 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0817 21:29:53.680280  102247 kubeadm.go:322] [bootstrap-token] Using token: 05muy2.jvisx3fsc1fuppe8
	I0817 21:29:53.681822  102247 out.go:204]   - Configuring RBAC rules ...
	I0817 21:29:53.680328  102247 command_runner.go:130] > [bootstrap-token] Using token: 05muy2.jvisx3fsc1fuppe8
	I0817 21:29:53.681972  102247 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0817 21:29:53.681990  102247 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0817 21:29:53.688455  102247 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0817 21:29:53.688475  102247 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0817 21:29:53.694282  102247 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0817 21:29:53.694299  102247 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0817 21:29:53.697089  102247 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0817 21:29:53.697108  102247 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0817 21:29:53.700649  102247 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0817 21:29:53.700669  102247 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0817 21:29:53.703186  102247 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0817 21:29:53.703201  102247 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0817 21:29:53.713232  102247 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0817 21:29:53.713252  102247 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0817 21:29:53.932857  102247 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0817 21:29:53.932883  102247 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0817 21:29:54.127044  102247 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0817 21:29:54.127086  102247 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0817 21:29:54.128166  102247 kubeadm.go:322] 
	I0817 21:29:54.128250  102247 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0817 21:29:54.128263  102247 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0817 21:29:54.128269  102247 kubeadm.go:322] 
	I0817 21:29:54.128389  102247 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0817 21:29:54.128409  102247 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0817 21:29:54.128417  102247 kubeadm.go:322] 
	I0817 21:29:54.128448  102247 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0817 21:29:54.128458  102247 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0817 21:29:54.128532  102247 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0817 21:29:54.128544  102247 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0817 21:29:54.128624  102247 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0817 21:29:54.128651  102247 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0817 21:29:54.128658  102247 kubeadm.go:322] 
	I0817 21:29:54.128728  102247 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0817 21:29:54.128745  102247 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0817 21:29:54.128750  102247 kubeadm.go:322] 
	I0817 21:29:54.128813  102247 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0817 21:29:54.128823  102247 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0817 21:29:54.128829  102247 kubeadm.go:322] 
	I0817 21:29:54.128901  102247 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0817 21:29:54.128912  102247 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0817 21:29:54.129015  102247 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0817 21:29:54.129031  102247 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0817 21:29:54.129122  102247 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0817 21:29:54.129131  102247 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0817 21:29:54.129136  102247 kubeadm.go:322] 
	I0817 21:29:54.129249  102247 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0817 21:29:54.129260  102247 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0817 21:29:54.129371  102247 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0817 21:29:54.129382  102247 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0817 21:29:54.129388  102247 kubeadm.go:322] 
	I0817 21:29:54.129507  102247 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 05muy2.jvisx3fsc1fuppe8 \
	I0817 21:29:54.129521  102247 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 05muy2.jvisx3fsc1fuppe8 \
	I0817 21:29:54.129684  102247 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6990f7150c46d703a60b6aaa6f152cf1f359295cabe399f949b0e443e5fdc599 \
	I0817 21:29:54.129695  102247 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:6990f7150c46d703a60b6aaa6f152cf1f359295cabe399f949b0e443e5fdc599 \
	I0817 21:29:54.129723  102247 kubeadm.go:322] 	--control-plane 
	I0817 21:29:54.129733  102247 command_runner.go:130] > 	--control-plane 
	I0817 21:29:54.129739  102247 kubeadm.go:322] 
	I0817 21:29:54.129864  102247 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0817 21:29:54.129876  102247 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0817 21:29:54.129914  102247 kubeadm.go:322] 
	I0817 21:29:54.130026  102247 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 05muy2.jvisx3fsc1fuppe8 \
	I0817 21:29:54.130037  102247 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 05muy2.jvisx3fsc1fuppe8 \
	I0817 21:29:54.130175  102247 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6990f7150c46d703a60b6aaa6f152cf1f359295cabe399f949b0e443e5fdc599 
	I0817 21:29:54.130185  102247 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:6990f7150c46d703a60b6aaa6f152cf1f359295cabe399f949b0e443e5fdc599 
	I0817 21:29:54.132060  102247 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1039-gcp\n", err: exit status 1
	I0817 21:29:54.132090  102247 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1039-gcp\n", err: exit status 1
	I0817 21:29:54.132250  102247 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0817 21:29:54.132281  102247 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
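The join commands printed above carry a --discovery-token-ca-cert-hash, which is simply the SHA-256 of the cluster CA's public key. It can be recomputed from the ca.crt staged earlier (the standard kubeadm recipe, with the cert path used in this run):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'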
	I0817 21:29:54.132356  102247 cni.go:84] Creating CNI manager for ""
	I0817 21:29:54.132375  102247 cni.go:136] 1 nodes found, recommending kindnet
	I0817 21:29:54.134009  102247 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0817 21:29:54.135288  102247 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0817 21:29:54.139498  102247 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0817 21:29:54.139520  102247 command_runner.go:130] >   Size: 3955775   	Blocks: 7728       IO Block: 4096   regular file
	I0817 21:29:54.139530  102247 command_runner.go:130] > Device: 33h/51d	Inode: 838607      Links: 1
	I0817 21:29:54.139547  102247 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0817 21:29:54.139558  102247 command_runner.go:130] > Access: 2023-05-09 19:53:47.000000000 +0000
	I0817 21:29:54.139566  102247 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I0817 21:29:54.139573  102247 command_runner.go:130] > Change: 2023-08-17 21:10:53.952483634 +0000
	I0817 21:29:54.139581  102247 command_runner.go:130] >  Birth: 2023-08-17 21:10:53.932481714 +0000
	I0817 21:29:54.139629  102247 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.27.4/kubectl ...
	I0817 21:29:54.139638  102247 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0817 21:29:54.155648  102247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0817 21:29:54.791749  102247 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0817 21:29:54.796530  102247 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0817 21:29:54.802766  102247 command_runner.go:130] > serviceaccount/kindnet created
	I0817 21:29:54.811966  102247 command_runner.go:130] > daemonset.apps/kindnet created
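With exactly one node, minikube selects kindnet and applies its manifest; the four "created" lines confirm the ClusterRole, ClusterRoleBinding, ServiceAccount and DaemonSet all landed. To verify the DaemonSet actually rolls out (a sketch, assuming the usual kube-system placement of minikube's kindnet manifest):

    kubectl -n kube-system rollout status daemonset kindnet --timeout=120s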
	I0817 21:29:54.816057  102247 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0817 21:29:54.816126  102247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:29:54.816180  102247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=887b29127f76723e975982e9ba9e8c24f3dd2612 minikube.k8s.io/name=multinode-938028 minikube.k8s.io/updated_at=2023_08_17T21_29_54_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:29:54.823190  102247 command_runner.go:130] > -16
	I0817 21:29:54.823247  102247 ops.go:34] apiserver oom_adj: -16
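The -16 read from /proc/<pid>/oom_adj shows the kubelet has shielded the API server from the OOM killer. Modern kernels expose the same knob as oom_score_adj (legacy oom_adj is scaled by 1000/17, so -16 here corresponds to the -997 the kubelet assigns to critical static pods). An equivalent probe (sketch):

    cat /proc/$(pgrep kube-apiserver)/oom_score_adj   # typically -997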
	I0817 21:29:54.888279  102247 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0817 21:29:54.888396  102247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:29:54.927086  102247 command_runner.go:130] > node/multinode-938028 labeled
	I0817 21:29:54.981433  102247 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0817 21:29:54.981541  102247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:29:55.047767  102247 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0817 21:29:55.548551  102247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:29:55.607802  102247 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0817 21:29:56.048568  102247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:29:56.109539  102247 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0817 21:29:56.548637  102247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:29:56.611829  102247 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0817 21:29:57.048182  102247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:29:57.108692  102247 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0817 21:29:57.548693  102247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:29:57.612544  102247 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0817 21:29:58.048082  102247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:29:58.111245  102247 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0817 21:29:58.548856  102247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:29:58.611016  102247 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0817 21:29:59.048701  102247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:29:59.107654  102247 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0817 21:29:59.548809  102247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:29:59.608746  102247 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0817 21:30:00.048077  102247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:30:00.109498  102247 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0817 21:30:00.548740  102247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:30:00.613384  102247 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0817 21:30:01.047912  102247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:30:01.111362  102247 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0817 21:30:01.547926  102247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:30:01.611434  102247 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0817 21:30:02.048034  102247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:30:02.106910  102247 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0817 21:30:02.548232  102247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:30:02.609982  102247 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0817 21:30:03.047938  102247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:30:03.114529  102247 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0817 21:30:03.548151  102247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:30:03.608573  102247 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0817 21:30:04.048914  102247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:30:04.115470  102247 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0817 21:30:04.548033  102247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:30:04.612486  102247 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0817 21:30:05.048934  102247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:30:05.112599  102247 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0817 21:30:05.548146  102247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:30:05.607879  102247 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0817 21:30:06.048909  102247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:30:06.107336  102247 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0817 21:30:06.548945  102247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0817 21:30:06.737465  102247 command_runner.go:130] > NAME      SECRETS   AGE
	I0817 21:30:06.737492  102247 command_runner.go:130] > default   0         0s
	I0817 21:30:06.737520  102247 kubeadm.go:1081] duration metric: took 11.921448215s to wait for elevateKubeSystemPrivileges.
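The run of "serviceaccounts \"default\" not found" errors above is expected: the controller-manager creates the default ServiceAccount asynchronously after the API server comes up, so minikube polls until it appears (about 12s in this run). The same wait expressed directly (sketch):

    until kubectl -n default get serviceaccount default >/dev/null 2>&1; do sleep 0.5; done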
	I0817 21:30:06.737547  102247 kubeadm.go:406] StartCluster complete in 22.17912052s
	I0817 21:30:06.737573  102247 settings.go:142] acquiring lock: {Name:mkab7abc846835e928b69a2120c7e34b55f8acdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:30:06.737650  102247 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16865-10716/kubeconfig
	I0817 21:30:06.740393  102247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-10716/kubeconfig: {Name:mk8d25353b4b324f395053b70676ed1b624da94d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:30:06.740657  102247 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0817 21:30:06.740800  102247 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0817 21:30:06.740888  102247 addons.go:69] Setting storage-provisioner=true in profile "multinode-938028"
	I0817 21:30:06.740902  102247 addons.go:69] Setting default-storageclass=true in profile "multinode-938028"
	I0817 21:30:06.740908  102247 config.go:182] Loaded profile config "multinode-938028": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 21:30:06.740933  102247 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-938028"
	I0817 21:30:06.740909  102247 addons.go:231] Setting addon storage-provisioner=true in "multinode-938028"
	I0817 21:30:06.741040  102247 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16865-10716/kubeconfig
	I0817 21:30:06.741305  102247 cli_runner.go:164] Run: docker container inspect multinode-938028 --format={{.State.Status}}
	I0817 21:30:06.741045  102247 host.go:66] Checking if "multinode-938028" exists ...
	I0817 21:30:06.741386  102247 kapi.go:59] client config for multinode-938028: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16865-10716/.minikube/profiles/multinode-938028/client.crt", KeyFile:"/home/jenkins/minikube-integration/16865-10716/.minikube/profiles/multinode-938028/client.key", CAFile:"/home/jenkins/minikube-integration/16865-10716/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d28680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 21:30:06.741877  102247 cli_runner.go:164] Run: docker container inspect multinode-938028 --format={{.State.Status}}
	I0817 21:30:06.742403  102247 cert_rotation.go:137] Starting client certificate rotation controller
	I0817 21:30:06.742671  102247 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0817 21:30:06.742690  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:06.742703  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:06.742713  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:06.752419  102247 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0817 21:30:06.752442  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:06.752452  102247 round_trippers.go:580]     Audit-Id: 7104b309-fa55-4277-9480-611e640e99a4
	I0817 21:30:06.752462  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:06.752471  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:06.752479  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:06.752487  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:06.752497  102247 round_trippers.go:580]     Content-Length: 291
	I0817 21:30:06.752505  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:06 GMT
	I0817 21:30:06.752539  102247 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7aa8886c-0fdb-4400-87f8-3d24dd96a241","resourceVersion":"362","creationTimestamp":"2023-08-17T21:29:53Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0817 21:30:06.753071  102247 request.go:1188] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7aa8886c-0fdb-4400-87f8-3d24dd96a241","resourceVersion":"362","creationTimestamp":"2023-08-17T21:29:53Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0817 21:30:06.753131  102247 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0817 21:30:06.753143  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:06.753154  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:06.753165  102247 round_trippers.go:473]     Content-Type: application/json
	I0817 21:30:06.753175  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:06.761388  102247 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0817 21:30:06.759901  102247 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0817 21:30:06.762912  102247 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 21:30:06.762932  102247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0817 21:30:06.762985  102247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-938028
	I0817 21:30:06.761511  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:06.763224  102247 round_trippers.go:580]     Content-Length: 291
	I0817 21:30:06.763240  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:06 GMT
	I0817 21:30:06.763250  102247 round_trippers.go:580]     Audit-Id: 2b6d77ef-92dd-4cb8-a66b-87bff52fb2e3
	I0817 21:30:06.763263  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:06.763276  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:06.763289  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:06.763302  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:06.763330  102247 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7aa8886c-0fdb-4400-87f8-3d24dd96a241","resourceVersion":"376","creationTimestamp":"2023-08-17T21:29:53Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0817 21:30:06.763480  102247 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0817 21:30:06.763487  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:06.763498  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:06.763508  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:06.764966  102247 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16865-10716/kubeconfig
	I0817 21:30:06.765241  102247 kapi.go:59] client config for multinode-938028: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16865-10716/.minikube/profiles/multinode-938028/client.crt", KeyFile:"/home/jenkins/minikube-integration/16865-10716/.minikube/profiles/multinode-938028/client.key", CAFile:"/home/jenkins/minikube-integration/16865-10716/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d28680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 21:30:06.765553  102247 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0817 21:30:06.765566  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:06.765577  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:06.765587  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:06.766652  102247 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:30:06.766679  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:06.766690  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:06 GMT
	I0817 21:30:06.766705  102247 round_trippers.go:580]     Audit-Id: d4ac40a7-a86f-4a60-87c4-c6cc0c1e0118
	I0817 21:30:06.766718  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:06.766731  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:06.766744  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:06.766757  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:06.766769  102247 round_trippers.go:580]     Content-Length: 291
	I0817 21:30:06.766798  102247 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7aa8886c-0fdb-4400-87f8-3d24dd96a241","resourceVersion":"376","creationTimestamp":"2023-08-17T21:29:53Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0817 21:30:06.766893  102247 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-938028" context rescaled to 1 replicas
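The GET/PUT pair above edits the Deployment's Scale subresource, dropping CoreDNS from 2 replicas to 1, which is sufficient for a single-node cluster. The kubectl equivalent (sketch):

    kubectl -n kube-system scale deployment coredns --replicas=1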
	I0817 21:30:06.766927  102247 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0817 21:30:06.768797  102247 out.go:177] * Verifying Kubernetes components...
	I0817 21:30:06.770355  102247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 21:30:06.769097  102247 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:30:06.770425  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:06.770438  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:06.770448  102247 round_trippers.go:580]     Content-Length: 109
	I0817 21:30:06.770456  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:06 GMT
	I0817 21:30:06.770464  102247 round_trippers.go:580]     Audit-Id: 07db7efd-1851-44b3-8c94-f7e78b4fa32a
	I0817 21:30:06.770471  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:06.770480  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:06.770493  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:06.770515  102247 request.go:1188] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"377"},"items":[]}
	I0817 21:30:06.770778  102247 addons.go:231] Setting addon default-storageclass=true in "multinode-938028"
	I0817 21:30:06.770812  102247 host.go:66] Checking if "multinode-938028" exists ...
	I0817 21:30:06.771289  102247 cli_runner.go:164] Run: docker container inspect multinode-938028 --format={{.State.Status}}
	I0817 21:30:06.785840  102247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/multinode-938028/id_rsa Username:docker}
	I0817 21:30:06.789400  102247 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0817 21:30:06.789415  102247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0817 21:30:06.789454  102247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-938028
	I0817 21:30:06.805892  102247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/multinode-938028/id_rsa Username:docker}
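Both addon manifests are scp'd into /etc/kubernetes/addons over the SSH clients opened above; minikube then applies them with the cluster's own kubectl binary. Applying one by hand would look roughly like this (a sketch; the command shape is inferred from the --kubeconfig pattern used elsewhere in this log):

    sudo /var/lib/minikube/binaries/v1.27.4/kubectl apply \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      -f /etc/kubernetes/addons/storage-provisioner.yaml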
	I0817 21:30:06.935733  102247 command_runner.go:130] > apiVersion: v1
	I0817 21:30:06.935762  102247 command_runner.go:130] > data:
	I0817 21:30:06.935768  102247 command_runner.go:130] >   Corefile: |
	I0817 21:30:06.935773  102247 command_runner.go:130] >     .:53 {
	I0817 21:30:06.935778  102247 command_runner.go:130] >         errors
	I0817 21:30:06.935788  102247 command_runner.go:130] >         health {
	I0817 21:30:06.935795  102247 command_runner.go:130] >            lameduck 5s
	I0817 21:30:06.935800  102247 command_runner.go:130] >         }
	I0817 21:30:06.935806  102247 command_runner.go:130] >         ready
	I0817 21:30:06.935816  102247 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0817 21:30:06.935829  102247 command_runner.go:130] >            pods insecure
	I0817 21:30:06.935839  102247 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0817 21:30:06.935853  102247 command_runner.go:130] >            ttl 30
	I0817 21:30:06.935864  102247 command_runner.go:130] >         }
	I0817 21:30:06.935873  102247 command_runner.go:130] >         prometheus :9153
	I0817 21:30:06.935884  102247 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0817 21:30:06.935894  102247 command_runner.go:130] >            max_concurrent 1000
	I0817 21:30:06.935905  102247 command_runner.go:130] >         }
	I0817 21:30:06.935915  102247 command_runner.go:130] >         cache 30
	I0817 21:30:06.935922  102247 command_runner.go:130] >         loop
	I0817 21:30:06.935952  102247 command_runner.go:130] >         reload
	I0817 21:30:06.935958  102247 command_runner.go:130] >         loadbalance
	I0817 21:30:06.935963  102247 command_runner.go:130] >     }
	I0817 21:30:06.935970  102247 command_runner.go:130] > kind: ConfigMap
	I0817 21:30:06.935977  102247 command_runner.go:130] > metadata:
	I0817 21:30:06.935992  102247 command_runner.go:130] >   creationTimestamp: "2023-08-17T21:29:53Z"
	I0817 21:30:06.936002  102247 command_runner.go:130] >   name: coredns
	I0817 21:30:06.936011  102247 command_runner.go:130] >   namespace: kube-system
	I0817 21:30:06.936023  102247 command_runner.go:130] >   resourceVersion: "267"
	I0817 21:30:06.936029  102247 command_runner.go:130] >   uid: 3b87740a-18e7-468a-a6c3-b8b4807c1408
	I0817 21:30:06.936211  102247 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
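The sed pipeline above rewrites the CoreDNS ConfigMap just dumped: it inserts a log directive before errors and a hosts block before the forward plugin, so host.minikube.internal resolves to the host gateway 192.168.58.1. Reconstructed from the sed expressions, the patched Corefile becomes (abridged):

    .:53 {
        log
        errors
        # health / ready / kubernetes / prometheus unchanged
        hosts {
           192.168.58.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        # cache / loop / reload / loadbalance unchanged
    }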
	I0817 21:30:06.936513  102247 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16865-10716/kubeconfig
	I0817 21:30:06.936783  102247 kapi.go:59] client config for multinode-938028: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16865-10716/.minikube/profiles/multinode-938028/client.crt", KeyFile:"/home/jenkins/minikube-integration/16865-10716/.minikube/profiles/multinode-938028/client.key", CAFile:"/home/jenkins/minikube-integration/16865-10716/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d28680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 21:30:06.937047  102247 node_ready.go:35] waiting up to 6m0s for node "multinode-938028" to be "Ready" ...
	I0817 21:30:06.937129  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-938028
	I0817 21:30:06.937141  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:06.937151  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:06.937163  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:06.939044  102247 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0817 21:30:06.939065  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:06.939074  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:06 GMT
	I0817 21:30:06.939082  102247 round_trippers.go:580]     Audit-Id: 9daa1674-6d4a-4cb3-88e5-93f0c101fa6d
	I0817 21:30:06.939090  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:06.939097  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:06.939105  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:06.939113  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:06.939248  102247 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","resourceVersion":"344","creationTimestamp":"2023-08-17T21:29:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-938028","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-938028","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_29_54_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:29:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0817 21:30:06.940019  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-938028
	I0817 21:30:06.940033  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:06.940043  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:06.940053  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:06.942650  102247 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:30:06.942676  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:06.942687  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:06.942700  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:06.942714  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:06.942724  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:06.942733  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:06 GMT
	I0817 21:30:06.942743  102247 round_trippers.go:580]     Audit-Id: d9792e5e-23e4-44c3-83a7-918447d47b68
	I0817 21:30:06.942871  102247 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","resourceVersion":"344","creationTimestamp":"2023-08-17T21:29:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-938028","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-938028","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_29_54_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:29:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0817 21:30:07.040838  102247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0817 21:30:07.143913  102247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0817 21:30:07.443584  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-938028
	I0817 21:30:07.443608  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:07.443620  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:07.443629  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:07.446503  102247 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:30:07.446525  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:07.446536  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:07.446545  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:07.446558  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:07 GMT
	I0817 21:30:07.446568  102247 round_trippers.go:580]     Audit-Id: d8b37790-bb51-4fb2-9c63-de75b84b242b
	I0817 21:30:07.446581  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:07.446593  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:07.446934  102247 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","resourceVersion":"344","creationTimestamp":"2023-08-17T21:29:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-938028","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-938028","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_29_54_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:29:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0817 21:30:07.551121  102247 command_runner.go:130] > configmap/coredns replaced
	I0817 21:30:07.551153  102247 start.go:901] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
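
Note: the injection confirmed here is performed by the sed pipeline run at 21:30:06.936211 above, which adds a log directive before errors and a hosts block before the forward plugin. After the replace, the relevant part of the CoreDNS Corefile has roughly this shape (a reconstruction from the sed expressions, not a capture from the cluster; the elided lines are the remaining stock kubeadm plugins):

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.58.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }

The fallthrough line matters: names other than host.minikube.internal fall through the hosts plugin to the usual kubernetes and forward resolution.
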
	I0817 21:30:07.728502  102247 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0817 21:30:07.930294  102247 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0817 21:30:07.936987  102247 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0817 21:30:07.943075  102247 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0817 21:30:07.944210  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-938028
	I0817 21:30:07.944234  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:07.944245  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:07.944255  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:07.946084  102247 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0817 21:30:07.946102  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:07.946109  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:07 GMT
	I0817 21:30:07.946117  102247 round_trippers.go:580]     Audit-Id: 1eb4b5e3-c131-42aa-8556-95e1c5f06d97
	I0817 21:30:07.946125  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:07.946145  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:07.946154  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:07.946166  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:07.946290  102247 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","resourceVersion":"344","creationTimestamp":"2023-08-17T21:29:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-938028","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-938028","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_29_54_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:29:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0817 21:30:07.949280  102247 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0817 21:30:07.957083  102247 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0817 21:30:07.963761  102247 command_runner.go:130] > pod/storage-provisioner created
	I0817 21:30:07.971314  102247 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0817 21:30:07.972642  102247 addons.go:502] enable addons completed in 1.23185298s: enabled=[default-storageclass storage-provisioner]
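
Note: the /etc/kubernetes/addons/storageclass.yaml applied at 21:30:07.040838 is minikube's stock default-storageclass manifest. An illustrative reconstruction (consistent with the "storageclass.storage.k8s.io/standard created" line and the endpoints/k8s.io-minikube-hostpath object created just above, but not copied from this run) looks roughly like:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: standard
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
    provisioner: k8s.io/minikube-hostpath

The is-default-class annotation is what lets PVCs with no storageClassName bind against the storage-provisioner pod created in the same step.
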
	I0817 21:30:08.444250  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-938028
	I0817 21:30:08.444273  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:08.444281  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:08.444288  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:08.446951  102247 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:30:08.446970  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:08.446977  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:08.446983  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:08.446989  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:08.446994  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:08 GMT
	I0817 21:30:08.447000  102247 round_trippers.go:580]     Audit-Id: 4f3a98a8-d5da-4ff5-8352-8ead4ee60b32
	I0817 21:30:08.447005  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:08.447110  102247 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","resourceVersion":"344","creationTimestamp":"2023-08-17T21:29:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-938028","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-938028","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_29_54_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:29:50Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0817 21:30:08.943641  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-938028
	I0817 21:30:08.943660  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:08.943669  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:08.943675  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:08.945876  102247 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:30:08.945915  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:08.945926  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:08.945933  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:08.945942  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:08 GMT
	I0817 21:30:08.945950  102247 round_trippers.go:580]     Audit-Id: c644ce97-3f1d-4e48-90e2-0661fc0d89aa
	I0817 21:30:08.945963  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:08.945972  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:08.946087  102247 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","resourceVersion":"425","creationTimestamp":"2023-08-17T21:29:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-938028","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-938028","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_29_54_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:29:50Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0817 21:30:08.946412  102247 node_ready.go:49] node "multinode-938028" has status "Ready":"True"
	I0817 21:30:08.946428  102247 node_ready.go:38] duration metric: took 2.009360004s waiting for node "multinode-938028" to be "Ready" ...
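
Note: the roughly 500ms GET cadence above is a plain poll of the Node object until status.conditions reports Ready=True. A minimal sketch of that loop (assuming client-go and a clientset built as in the earlier snippet; waitNodeReady is a hypothetical helper for illustration, not minikube's node_ready.go implementation):

    package nodewait

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady fetches the Node roughly every 500ms, matching the cadence
    // visible in the log, and returns once the NodeReady condition is True.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil // "Ready":"True", as observed at 21:30:08 above
                    }
                }
            }
            time.Sleep(500 * time.Millisecond) // transient API errors are simply retried
        }
        return fmt.Errorf("node %q not Ready within %v", name, timeout)
    }
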
	I0817 21:30:08.946437  102247 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 21:30:08.946511  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0817 21:30:08.946524  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:08.946534  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:08.946543  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:08.949662  102247 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:30:08.949685  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:08.949694  102247 round_trippers.go:580]     Audit-Id: 1a12cb0c-a50f-479e-b57f-8c936fcbd7df
	I0817 21:30:08.949704  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:08.949714  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:08.949725  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:08.949737  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:08.949750  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:08 GMT
	I0817 21:30:08.950139  102247 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"431"},"items":[{"metadata":{"name":"coredns-5d78c9869d-klmz7","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"9cb10fd3-6480-47a0-8698-0573bb8dbfd1","resourceVersion":"428","creationTimestamp":"2023-08-17T21:30:06Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"2c31a337-e481-47f2-9524-9a6e8cf199fb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:30:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c31a337-e481-47f2-9524-9a6e8cf199fb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55051 chars]
	I0817 21:30:08.953131  102247 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-klmz7" in "kube-system" namespace to be "Ready" ...
	I0817 21:30:08.953193  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-klmz7
	I0817 21:30:08.953201  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:08.953208  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:08.953215  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:08.955327  102247 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:30:08.955342  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:08.955349  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:08.955354  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:08.955363  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:08.955374  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:08.955387  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:08 GMT
	I0817 21:30:08.955399  102247 round_trippers.go:580]     Audit-Id: cd6914ee-bbee-482f-a607-9335af9629b1
	I0817 21:30:08.955504  102247 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-klmz7","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"9cb10fd3-6480-47a0-8698-0573bb8dbfd1","resourceVersion":"428","creationTimestamp":"2023-08-17T21:30:06Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"2c31a337-e481-47f2-9524-9a6e8cf199fb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:30:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c31a337-e481-47f2-9524-9a6e8cf199fb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0817 21:30:08.956010  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-938028
	I0817 21:30:08.956025  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:08.956037  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:08.956047  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:08.957654  102247 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0817 21:30:08.957672  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:08.957683  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:08 GMT
	I0817 21:30:08.957693  102247 round_trippers.go:580]     Audit-Id: c10edb79-3662-41b2-b75a-811ea1fc07ba
	I0817 21:30:08.957702  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:08.957711  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:08.957723  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:08.957735  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:08.957845  102247 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","resourceVersion":"425","creationTimestamp":"2023-08-17T21:29:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-938028","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-938028","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_29_54_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:29:50Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0817 21:30:08.958186  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-klmz7
	I0817 21:30:08.958198  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:08.958205  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:08.958211  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:08.959903  102247 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0817 21:30:08.959917  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:08.959923  102247 round_trippers.go:580]     Audit-Id: 34cd8708-9bff-4c50-9b58-2eaf8f58f02b
	I0817 21:30:08.959929  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:08.959937  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:08.959946  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:08.959955  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:08.959964  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:08 GMT
	I0817 21:30:08.960060  102247 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-klmz7","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"9cb10fd3-6480-47a0-8698-0573bb8dbfd1","resourceVersion":"428","creationTimestamp":"2023-08-17T21:30:06Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"2c31a337-e481-47f2-9524-9a6e8cf199fb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:30:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c31a337-e481-47f2-9524-9a6e8cf199fb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0817 21:30:08.960399  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-938028
	I0817 21:30:08.960411  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:08.960418  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:08.960425  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:08.961957  102247 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0817 21:30:08.961984  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:08.961994  102247 round_trippers.go:580]     Audit-Id: 3a239509-635b-40aa-af02-46fd9ac02f91
	I0817 21:30:08.962003  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:08.962011  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:08.962020  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:08.962028  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:08.962038  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:08 GMT
	I0817 21:30:08.962237  102247 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","resourceVersion":"425","creationTimestamp":"2023-08-17T21:29:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-938028","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-938028","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_29_54_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:29:50Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0817 21:30:09.463284  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-klmz7
	I0817 21:30:09.463305  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:09.463314  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:09.463320  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:09.465665  102247 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:30:09.465690  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:09.465701  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:09.465710  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:09.465718  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:09.465739  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:09 GMT
	I0817 21:30:09.465752  102247 round_trippers.go:580]     Audit-Id: 669da601-2094-45c4-a622-eb71992532e9
	I0817 21:30:09.465764  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:09.466021  102247 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-klmz7","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"9cb10fd3-6480-47a0-8698-0573bb8dbfd1","resourceVersion":"435","creationTimestamp":"2023-08-17T21:30:06Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"2c31a337-e481-47f2-9524-9a6e8cf199fb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:30:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c31a337-e481-47f2-9524-9a6e8cf199fb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0817 21:30:09.466525  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-938028
	I0817 21:30:09.466540  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:09.466547  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:09.466553  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:09.468517  102247 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0817 21:30:09.468533  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:09.468539  102247 round_trippers.go:580]     Audit-Id: debd679c-5c15-4a11-ae24-082d5c4bbd93
	I0817 21:30:09.468545  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:09.468550  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:09.468555  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:09.468560  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:09.468565  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:09 GMT
	I0817 21:30:09.468749  102247 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","resourceVersion":"425","creationTimestamp":"2023-08-17T21:29:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-938028","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-938028","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_29_54_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:29:50Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0817 21:30:09.469087  102247 pod_ready.go:92] pod "coredns-5d78c9869d-klmz7" in "kube-system" namespace has status "Ready":"True"
	I0817 21:30:09.469104  102247 pod_ready.go:81] duration metric: took 515.952257ms waiting for pod "coredns-5d78c9869d-klmz7" in "kube-system" namespace to be "Ready" ...
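
Note: the same per-pod readiness check can be reproduced by hand against this cluster. An equivalent kubectl command (for illustration; not one issued by this run, and assuming the profile's kubeconfig context name) would be:

    kubectl --context multinode-938028 -n kube-system wait --for=condition=Ready pod/coredns-5d78c9869d-klmz7 --timeout=6m
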
	I0817 21:30:09.469115  102247 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-938028" in "kube-system" namespace to be "Ready" ...
	I0817 21:30:09.469159  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-938028
	I0817 21:30:09.469167  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:09.469174  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:09.469180  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:09.471055  102247 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0817 21:30:09.471075  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:09.471084  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:09 GMT
	I0817 21:30:09.471093  102247 round_trippers.go:580]     Audit-Id: 7e7adcd4-b66b-456f-9c6b-106064740efd
	I0817 21:30:09.471102  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:09.471110  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:09.471121  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:09.471138  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:09.471248  102247 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-938028","namespace":"kube-system","uid":"8467d526-5134-4571-bd8b-37cba78ca9a6","resourceVersion":"363","creationTimestamp":"2023-08-17T21:29:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"a18bac1e6ded925776f8b635b518e616","kubernetes.io/config.mirror":"a18bac1e6ded925776f8b635b518e616","kubernetes.io/config.seen":"2023-08-17T21:29:47.593280067Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:29:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6058 chars]
	I0817 21:30:09.471648  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-938028
	I0817 21:30:09.471660  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:09.471668  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:09.471673  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:09.473430  102247 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0817 21:30:09.473444  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:09.473450  102247 round_trippers.go:580]     Audit-Id: eef05b7b-344c-4189-af80-699d15b9fea5
	I0817 21:30:09.473455  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:09.473461  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:09.473466  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:09.473471  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:09.473477  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:09 GMT
	I0817 21:30:09.473621  102247 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","resourceVersion":"425","creationTimestamp":"2023-08-17T21:29:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-938028","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-938028","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_29_54_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:29:50Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0817 21:30:09.473946  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-938028
	I0817 21:30:09.473958  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:09.473965  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:09.473971  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:09.475546  102247 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0817 21:30:09.475563  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:09.475572  102247 round_trippers.go:580]     Audit-Id: c179b276-2f3f-49c0-b0c5-68494b42e6f4
	I0817 21:30:09.475580  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:09.475589  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:09.475598  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:09.475610  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:09.475620  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:09 GMT
	I0817 21:30:09.475733  102247 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-938028","namespace":"kube-system","uid":"8467d526-5134-4571-bd8b-37cba78ca9a6","resourceVersion":"363","creationTimestamp":"2023-08-17T21:29:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"a18bac1e6ded925776f8b635b518e616","kubernetes.io/config.mirror":"a18bac1e6ded925776f8b635b518e616","kubernetes.io/config.seen":"2023-08-17T21:29:47.593280067Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:29:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6058 chars]
	I0817 21:30:09.476069  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-938028
	I0817 21:30:09.476081  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:09.476092  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:09.476101  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:09.477659  102247 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0817 21:30:09.477679  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:09.477690  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:09.477698  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:09.477707  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:09.477719  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:09 GMT
	I0817 21:30:09.477736  102247 round_trippers.go:580]     Audit-Id: e9859207-93ac-4bdf-afc2-ed9c7357eb37
	I0817 21:30:09.477747  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:09.477875  102247 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","resourceVersion":"425","creationTimestamp":"2023-08-17T21:29:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-938028","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-938028","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_29_54_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:29:50Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0817 21:30:09.978593  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-938028
	I0817 21:30:09.978611  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:09.978618  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:09.978625  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:09.982857  102247 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0817 21:30:09.982875  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:09.982883  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:09.982889  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:09 GMT
	I0817 21:30:09.982897  102247 round_trippers.go:580]     Audit-Id: 9558813f-3745-41eb-9949-7224d2cd1ec4
	I0817 21:30:09.982907  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:09.982916  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:09.982943  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:09.983055  102247 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-938028","namespace":"kube-system","uid":"8467d526-5134-4571-bd8b-37cba78ca9a6","resourceVersion":"363","creationTimestamp":"2023-08-17T21:29:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"a18bac1e6ded925776f8b635b518e616","kubernetes.io/config.mirror":"a18bac1e6ded925776f8b635b518e616","kubernetes.io/config.seen":"2023-08-17T21:29:47.593280067Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:29:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6058 chars]
	I0817 21:30:09.983442  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-938028
	I0817 21:30:09.983454  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:09.983461  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:09.983467  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:09.985183  102247 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0817 21:30:09.985204  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:09.985215  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:09 GMT
	I0817 21:30:09.985224  102247 round_trippers.go:580]     Audit-Id: ef901928-b659-4fac-bfd3-9dde98ee7735
	I0817 21:30:09.985232  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:09.985249  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:09.985258  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:09.985267  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:09.985385  102247 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","resourceVersion":"425","creationTimestamp":"2023-08-17T21:29:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-938028","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-938028","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_29_54_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:29:50Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0817 21:30:10.478993  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-938028
	I0817 21:30:10.479013  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:10.479021  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:10.479027  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:10.481190  102247 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:30:10.481209  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:10.481216  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:10.481222  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:10.481229  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:10.481235  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:10 GMT
	I0817 21:30:10.481240  102247 round_trippers.go:580]     Audit-Id: 3026f4bd-6040-43a0-bff1-f18c989cc9c8
	I0817 21:30:10.481249  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:10.481355  102247 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-938028","namespace":"kube-system","uid":"8467d526-5134-4571-bd8b-37cba78ca9a6","resourceVersion":"363","creationTimestamp":"2023-08-17T21:29:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"a18bac1e6ded925776f8b635b518e616","kubernetes.io/config.mirror":"a18bac1e6ded925776f8b635b518e616","kubernetes.io/config.seen":"2023-08-17T21:29:47.593280067Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:29:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6058 chars]
	I0817 21:30:10.481829  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-938028
	I0817 21:30:10.481843  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:10.481850  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:10.481856  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:10.483625  102247 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0817 21:30:10.483645  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:10.483656  102247 round_trippers.go:580]     Audit-Id: b75c8f3b-565a-422f-8487-0f45e1b12e66
	I0817 21:30:10.483665  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:10.483674  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:10.483680  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:10.483691  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:10.483697  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:10 GMT
	I0817 21:30:10.483825  102247 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","resourceVersion":"425","creationTimestamp":"2023-08-17T21:29:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-938028","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-938028","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_29_54_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:29:50Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0817 21:30:10.978394  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-938028
	I0817 21:30:10.978415  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:10.978427  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:10.978437  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:10.980595  102247 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:30:10.980612  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:10.980619  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:10.980625  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:10.980630  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:10 GMT
	I0817 21:30:10.980635  102247 round_trippers.go:580]     Audit-Id: b41234fd-d5d5-4c53-82cd-df6fa28f9071
	I0817 21:30:10.980641  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:10.980648  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:10.980745  102247 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-938028","namespace":"kube-system","uid":"8467d526-5134-4571-bd8b-37cba78ca9a6","resourceVersion":"363","creationTimestamp":"2023-08-17T21:29:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"a18bac1e6ded925776f8b635b518e616","kubernetes.io/config.mirror":"a18bac1e6ded925776f8b635b518e616","kubernetes.io/config.seen":"2023-08-17T21:29:47.593280067Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:29:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6058 chars]
	I0817 21:30:10.981105  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-938028
	I0817 21:30:10.981116  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:10.981123  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:10.981129  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:10.982899  102247 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0817 21:30:10.982918  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:10.982925  102247 round_trippers.go:580]     Audit-Id: 980dea96-0859-4093-b405-3ec29e1d2549
	I0817 21:30:10.982931  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:10.982937  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:10.982945  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:10.982953  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:10.982968  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:10 GMT
	I0817 21:30:10.983153  102247 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","resourceVersion":"425","creationTimestamp":"2023-08-17T21:29:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-938028","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-938028","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_29_54_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:29:50Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0817 21:30:11.478650  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-938028
	I0817 21:30:11.478682  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:11.478690  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:11.478696  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:11.481069  102247 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:30:11.481098  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:11.481109  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:11 GMT
	I0817 21:30:11.481120  102247 round_trippers.go:580]     Audit-Id: 4f931190-09ab-4b5a-8265-4486ab1116b3
	I0817 21:30:11.481131  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:11.481138  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:11.481149  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:11.481155  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:11.481267  102247 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-938028","namespace":"kube-system","uid":"8467d526-5134-4571-bd8b-37cba78ca9a6","resourceVersion":"363","creationTimestamp":"2023-08-17T21:29:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"a18bac1e6ded925776f8b635b518e616","kubernetes.io/config.mirror":"a18bac1e6ded925776f8b635b518e616","kubernetes.io/config.seen":"2023-08-17T21:29:47.593280067Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:29:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6058 chars]
	I0817 21:30:11.481650  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-938028
	I0817 21:30:11.481664  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:11.481672  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:11.481679  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:11.483484  102247 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0817 21:30:11.483501  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:11.483510  102247 round_trippers.go:580]     Audit-Id: 468efda8-562d-48fd-bf3a-abea8d313f87
	I0817 21:30:11.483520  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:11.483535  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:11.483545  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:11.483557  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:11.483567  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:11 GMT
	I0817 21:30:11.483687  102247 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","resourceVersion":"425","creationTimestamp":"2023-08-17T21:29:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-938028","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-938028","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_29_54_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:29:50Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0817 21:30:11.484012  102247 pod_ready.go:102] pod "etcd-multinode-938028" in "kube-system" namespace has status "Ready":"False"
	I0817 21:30:11.979261  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-938028
	I0817 21:30:11.979281  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:11.979289  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:11.979297  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:11.981683  102247 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:30:11.981706  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:11.981715  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:11 GMT
	I0817 21:30:11.981722  102247 round_trippers.go:580]     Audit-Id: 0eb23fc7-59b8-4484-b154-9e35cae01609
	I0817 21:30:11.981730  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:11.981739  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:11.981746  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:11.981753  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:11.981870  102247 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-938028","namespace":"kube-system","uid":"8467d526-5134-4571-bd8b-37cba78ca9a6","resourceVersion":"363","creationTimestamp":"2023-08-17T21:29:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"a18bac1e6ded925776f8b635b518e616","kubernetes.io/config.mirror":"a18bac1e6ded925776f8b635b518e616","kubernetes.io/config.seen":"2023-08-17T21:29:47.593280067Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:29:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6058 chars]
	I0817 21:30:11.982379  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-938028
	I0817 21:30:11.982397  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:11.982407  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:11.982415  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:11.984328  102247 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0817 21:30:11.984344  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:11.984351  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:11.984357  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:11.984362  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:11.984367  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:11.984374  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:11 GMT
	I0817 21:30:11.984382  102247 round_trippers.go:580]     Audit-Id: 22ca0ee3-98cf-424c-8213-f8a430d3bc77
	I0817 21:30:11.984537  102247 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","resourceVersion":"425","creationTimestamp":"2023-08-17T21:29:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-938028","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-938028","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_29_54_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:29:50Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0817 21:30:12.479123  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-938028
	I0817 21:30:12.479145  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:12.479154  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:12.479163  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:12.481516  102247 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:30:12.481536  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:12.481545  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:12.481550  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:12.481556  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:12.481561  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:12 GMT
	I0817 21:30:12.481567  102247 round_trippers.go:580]     Audit-Id: ef937ce2-a4de-4728-ba24-a8216508ba4d
	I0817 21:30:12.481572  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:12.481667  102247 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-938028","namespace":"kube-system","uid":"8467d526-5134-4571-bd8b-37cba78ca9a6","resourceVersion":"363","creationTimestamp":"2023-08-17T21:29:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"a18bac1e6ded925776f8b635b518e616","kubernetes.io/config.mirror":"a18bac1e6ded925776f8b635b518e616","kubernetes.io/config.seen":"2023-08-17T21:29:47.593280067Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:29:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6058 chars]
	I0817 21:30:12.482078  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-938028
	I0817 21:30:12.482092  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:12.482099  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:12.482105  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:12.484010  102247 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0817 21:30:12.484030  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:12.484037  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:12.484046  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:12.484056  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:12.484069  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:12 GMT
	I0817 21:30:12.484078  102247 round_trippers.go:580]     Audit-Id: ba9dc9eb-1666-4062-9471-f12e538b92b4
	I0817 21:30:12.484084  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:12.484180  102247 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","resourceVersion":"425","creationTimestamp":"2023-08-17T21:29:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-938028","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-938028","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_29_54_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:29:50Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0817 21:30:12.978689  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-938028
	I0817 21:30:12.978711  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:12.978719  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:12.978726  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:12.980990  102247 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:30:12.981014  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:12.981024  102247 round_trippers.go:580]     Audit-Id: ad8617c8-2f26-4631-873c-b0c679273633
	I0817 21:30:12.981034  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:12.981043  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:12.981048  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:12.981054  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:12.981063  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:12 GMT
	I0817 21:30:12.981162  102247 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-938028","namespace":"kube-system","uid":"8467d526-5134-4571-bd8b-37cba78ca9a6","resourceVersion":"363","creationTimestamp":"2023-08-17T21:29:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"a18bac1e6ded925776f8b635b518e616","kubernetes.io/config.mirror":"a18bac1e6ded925776f8b635b518e616","kubernetes.io/config.seen":"2023-08-17T21:29:47.593280067Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:29:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6058 chars]
	I0817 21:30:12.981697  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-938028
	I0817 21:30:12.981714  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:12.981721  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:12.981727  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:12.983645  102247 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0817 21:30:12.983668  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:12.983677  102247 round_trippers.go:580]     Audit-Id: 24d8ca62-92a5-469c-bab9-fd865a1b9014
	I0817 21:30:12.983684  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:12.983690  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:12.983696  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:12.983702  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:12.983710  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:12 GMT
	I0817 21:30:12.983838  102247 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","resourceVersion":"425","creationTimestamp":"2023-08-17T21:29:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-938028","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-938028","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_29_54_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:29:50Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0817 21:30:13.479123  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-938028
	I0817 21:30:13.479143  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:13.479151  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:13.479157  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:13.481415  102247 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:30:13.481439  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:13.481449  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:13.481454  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:13.481460  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:13.481466  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:13 GMT
	I0817 21:30:13.481471  102247 round_trippers.go:580]     Audit-Id: 79927571-cc04-489d-bf4b-b90019a1c9e5
	I0817 21:30:13.481480  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:13.481628  102247 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-938028","namespace":"kube-system","uid":"8467d526-5134-4571-bd8b-37cba78ca9a6","resourceVersion":"363","creationTimestamp":"2023-08-17T21:29:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"a18bac1e6ded925776f8b635b518e616","kubernetes.io/config.mirror":"a18bac1e6ded925776f8b635b518e616","kubernetes.io/config.seen":"2023-08-17T21:29:47.593280067Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:29:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6058 chars]
	I0817 21:30:13.482022  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-938028
	I0817 21:30:13.482033  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:13.482040  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:13.482050  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:13.483907  102247 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0817 21:30:13.483925  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:13.483935  102247 round_trippers.go:580]     Audit-Id: 62b0a023-8d47-4f4c-bf0b-ce143a90d47c
	I0817 21:30:13.483943  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:13.483951  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:13.483960  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:13.483974  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:13.483982  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:13 GMT
	I0817 21:30:13.484085  102247 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","resourceVersion":"425","creationTimestamp":"2023-08-17T21:29:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-938028","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-938028","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_29_54_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:29:50Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0817 21:30:13.484411  102247 pod_ready.go:102] pod "etcd-multinode-938028" in "kube-system" namespace has status "Ready":"False"
	I0817 21:30:13.978638  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-938028
	I0817 21:30:13.978657  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:13.978666  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:13.978672  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:13.981287  102247 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:30:13.981310  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:13.981320  102247 round_trippers.go:580]     Audit-Id: 02cd7cb5-7bf7-447b-b0af-46ad4a065593
	I0817 21:30:13.981330  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:13.981338  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:13.981346  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:13.981358  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:13.981366  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:13 GMT
	I0817 21:30:13.981470  102247 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-938028","namespace":"kube-system","uid":"8467d526-5134-4571-bd8b-37cba78ca9a6","resourceVersion":"363","creationTimestamp":"2023-08-17T21:29:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"a18bac1e6ded925776f8b635b518e616","kubernetes.io/config.mirror":"a18bac1e6ded925776f8b635b518e616","kubernetes.io/config.seen":"2023-08-17T21:29:47.593280067Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:29:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6058 chars]
	I0817 21:30:13.981827  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-938028
	I0817 21:30:13.981837  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:13.981844  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:13.981850  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:13.983595  102247 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0817 21:30:13.983611  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:13.983617  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:13.983623  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:13.983628  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:13.983635  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:13 GMT
	I0817 21:30:13.983640  102247 round_trippers.go:580]     Audit-Id: a6657431-0b59-4851-afe8-539efce9799c
	I0817 21:30:13.983645  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:13.983772  102247 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","resourceVersion":"425","creationTimestamp":"2023-08-17T21:29:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-938028","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-938028","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_29_54_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:29:50Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0817 21:30:14.479140  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-938028
	I0817 21:30:14.479160  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:14.479168  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:14.479174  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:14.481557  102247 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:30:14.481579  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:14.481586  102247 round_trippers.go:580]     Audit-Id: eb94ba8d-0391-48e5-8898-727fe9cc5046
	I0817 21:30:14.481593  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:14.481598  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:14.481603  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:14.481609  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:14.481614  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:14 GMT
	I0817 21:30:14.481694  102247 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-938028","namespace":"kube-system","uid":"8467d526-5134-4571-bd8b-37cba78ca9a6","resourceVersion":"452","creationTimestamp":"2023-08-17T21:29:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"a18bac1e6ded925776f8b635b518e616","kubernetes.io/config.mirror":"a18bac1e6ded925776f8b635b518e616","kubernetes.io/config.seen":"2023-08-17T21:29:47.593280067Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:29:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0817 21:30:14.482117  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-938028
	I0817 21:30:14.482131  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:14.482139  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:14.482145  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:14.484013  102247 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0817 21:30:14.484035  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:14.484045  102247 round_trippers.go:580]     Audit-Id: 2e43d25b-d246-4e9a-9d4f-de15a6bf60c3
	I0817 21:30:14.484053  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:14.484061  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:14.484073  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:14.484083  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:14.484095  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:14 GMT
	I0817 21:30:14.484191  102247 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","resourceVersion":"425","creationTimestamp":"2023-08-17T21:29:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-938028","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-938028","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_29_54_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:29:50Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0817 21:30:14.484473  102247 pod_ready.go:92] pod "etcd-multinode-938028" in "kube-system" namespace has status "Ready":"True"
	I0817 21:30:14.484486  102247 pod_ready.go:81] duration metric: took 5.015365365s waiting for pod "etcd-multinode-938028" in "kube-system" namespace to be "Ready" ...
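
For reference, the "Ready":"True"/"False" verdicts that pod_ready.go logs above come from the pod's PodReady condition in status.conditions. A minimal client-go sketch of that check (the helper name is illustrative, not minikube's actual code):

    import corev1 "k8s.io/api/core/v1"

    // isPodReady reports whether a pod's PodReady condition is True,
    // the same signal pod_ready.go waits on above.
    func isPodReady(pod *corev1.Pod) bool {
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue
            }
        }
        return false
    }
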
	I0817 21:30:14.484496  102247 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-938028" in "kube-system" namespace to be "Ready" ...
	I0817 21:30:14.484533  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-938028
	I0817 21:30:14.484541  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:14.484547  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:14.484554  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:14.486361  102247 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0817 21:30:14.486383  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:14.486393  102247 round_trippers.go:580]     Audit-Id: b371f26c-1051-4e6b-9ce7-1c89854eefca
	I0817 21:30:14.486402  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:14.486417  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:14.486434  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:14.486443  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:14.486455  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:14 GMT
	I0817 21:30:14.486570  102247 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-938028","namespace":"kube-system","uid":"6dac9864-4745-4595-9cc1-a8ce957c247c","resourceVersion":"453","creationTimestamp":"2023-08-17T21:29:54Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"d26f6070344b4e89652ceba8dd748820","kubernetes.io/config.mirror":"d26f6070344b4e89652ceba8dd748820","kubernetes.io/config.seen":"2023-08-17T21:29:53.969209809Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:29:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0817 21:30:14.487002  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-938028
	I0817 21:30:14.487017  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:14.487024  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:14.487030  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:14.488597  102247 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0817 21:30:14.488619  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:14.488629  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:14.488638  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:14.488647  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:14.488664  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:14.488671  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:14 GMT
	I0817 21:30:14.488676  102247 round_trippers.go:580]     Audit-Id: bda9af7d-e0e8-4ad2-ac3e-6352d5b581e0
	I0817 21:30:14.488769  102247 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","resourceVersion":"425","creationTimestamp":"2023-08-17T21:29:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-938028","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-938028","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_29_54_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:29:50Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0817 21:30:14.489027  102247 pod_ready.go:92] pod "kube-apiserver-multinode-938028" in "kube-system" namespace has status "Ready":"True"
	I0817 21:30:14.489039  102247 pod_ready.go:81] duration metric: took 4.53824ms waiting for pod "kube-apiserver-multinode-938028" in "kube-system" namespace to be "Ready" ...
	I0817 21:30:14.489047  102247 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-938028" in "kube-system" namespace to be "Ready" ...
	I0817 21:30:14.489084  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-938028
	I0817 21:30:14.489092  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:14.489099  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:14.489105  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:14.491024  102247 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0817 21:30:14.491041  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:14.491048  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:14 GMT
	I0817 21:30:14.491054  102247 round_trippers.go:580]     Audit-Id: 2f6a858b-4087-4228-9870-2add35ceeec4
	I0817 21:30:14.491062  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:14.491071  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:14.491079  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:14.491088  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:14.491197  102247 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-938028","namespace":"kube-system","uid":"4089bb0e-1099-40d1-9df4-68943ea6fb68","resourceVersion":"455","creationTimestamp":"2023-08-17T21:29:54Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b5f752a96b068e4da65f4bf187b99598","kubernetes.io/config.mirror":"b5f752a96b068e4da65f4bf187b99598","kubernetes.io/config.seen":"2023-08-17T21:29:53.969211526Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:29:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0817 21:30:14.491537  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-938028
	I0817 21:30:14.491548  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:14.491555  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:14.491562  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:14.493073  102247 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0817 21:30:14.493091  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:14.493101  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:14 GMT
	I0817 21:30:14.493113  102247 round_trippers.go:580]     Audit-Id: 5ed57702-e044-4a9f-b1e2-8507cfae7376
	I0817 21:30:14.493125  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:14.493137  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:14.493149  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:14.493162  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:14.493251  102247 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","resourceVersion":"425","creationTimestamp":"2023-08-17T21:29:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-938028","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-938028","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_29_54_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:29:50Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0817 21:30:14.493496  102247 pod_ready.go:92] pod "kube-controller-manager-multinode-938028" in "kube-system" namespace has status "Ready":"True"
	I0817 21:30:14.493508  102247 pod_ready.go:81] duration metric: took 4.455227ms waiting for pod "kube-controller-manager-multinode-938028" in "kube-system" namespace to be "Ready" ...
	I0817 21:30:14.493516  102247 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bf5b5" in "kube-system" namespace to be "Ready" ...
	I0817 21:30:14.493567  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bf5b5
	I0817 21:30:14.493575  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:14.493581  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:14.493587  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:14.495144  102247 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0817 21:30:14.495158  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:14.495165  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:14.495170  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:14 GMT
	I0817 21:30:14.495176  102247 round_trippers.go:580]     Audit-Id: 624f3bf9-7745-4216-b6f2-5ea59d7f7e13
	I0817 21:30:14.495182  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:14.495191  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:14.495197  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:14.495319  102247 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bf5b5","generateName":"kube-proxy-","namespace":"kube-system","uid":"39b3791d-3973-4cb6-ac55-eecde2f2fd0f","resourceVersion":"419","creationTimestamp":"2023-08-17T21:30:06Z","labels":{"controller-revision-hash":"86cc8bcbf7","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"831ddc65-acb9-4009-a551-276dd84b70e8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:30:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"831ddc65-acb9-4009-a551-276dd84b70e8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0817 21:30:14.544022  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-938028
	I0817 21:30:14.544042  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:14.544051  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:14.544058  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:14.546261  102247 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:30:14.546284  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:14.546291  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:14.546297  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:14 GMT
	I0817 21:30:14.546302  102247 round_trippers.go:580]     Audit-Id: 2e56b57a-5f99-4c2b-aba6-059f8135b317
	I0817 21:30:14.546308  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:14.546314  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:14.546319  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:14.546477  102247 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","resourceVersion":"425","creationTimestamp":"2023-08-17T21:29:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-938028","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-938028","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_29_54_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:29:50Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0817 21:30:14.546789  102247 pod_ready.go:92] pod "kube-proxy-bf5b5" in "kube-system" namespace has status "Ready":"True"
	I0817 21:30:14.546805  102247 pod_ready.go:81] duration metric: took 53.280452ms waiting for pod "kube-proxy-bf5b5" in "kube-system" namespace to be "Ready" ...
	I0817 21:30:14.546814  102247 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-938028" in "kube-system" namespace to be "Ready" ...
	I0817 21:30:14.744231  102247 request.go:628] Waited for 197.354903ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-938028
	I0817 21:30:14.744293  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-938028
	I0817 21:30:14.744298  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:14.744310  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:14.744319  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:14.746571  102247 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:30:14.746594  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:14.746604  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:14.746613  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:14 GMT
	I0817 21:30:14.746622  102247 round_trippers.go:580]     Audit-Id: a55f6d36-a891-4f66-a158-ed93c48f34c4
	I0817 21:30:14.746631  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:14.746639  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:14.746647  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:14.746747  102247 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-938028","namespace":"kube-system","uid":"ec4e68df-918c-4e2c-b757-5117e84954d2","resourceVersion":"454","creationTimestamp":"2023-08-17T21:29:54Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0f42571bbc769848748479d22483ba61","kubernetes.io/config.mirror":"0f42571bbc769848748479d22483ba61","kubernetes.io/config.seen":"2023-08-17T21:29:53.969212656Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:29:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0817 21:30:14.944355  102247 request.go:628] Waited for 197.238161ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-938028
	I0817 21:30:14.944399  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-938028
	I0817 21:30:14.944404  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:14.944411  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:14.944418  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:14.946711  102247 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:30:14.946739  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:14.946746  102247 round_trippers.go:580]     Audit-Id: 8228836f-6a77-4ea8-b828-ea54149c4682
	I0817 21:30:14.946752  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:14.946758  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:14.946763  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:14.946769  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:14.946778  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:14 GMT
	I0817 21:30:14.946924  102247 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","resourceVersion":"425","creationTimestamp":"2023-08-17T21:29:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-938028","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-938028","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_29_54_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:29:50Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0817 21:30:14.947244  102247 pod_ready.go:92] pod "kube-scheduler-multinode-938028" in "kube-system" namespace has status "Ready":"True"
	I0817 21:30:14.947258  102247 pod_ready.go:81] duration metric: took 400.438173ms waiting for pod "kube-scheduler-multinode-938028" in "kube-system" namespace to be "Ready" ...
	I0817 21:30:14.947268  102247 pod_ready.go:38] duration metric: took 6.000821912s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
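
The timestamps above show the shape of this wait: one pod GET plus one node GET roughly every 500ms, under a per-pod budget of 6m0s. A sketch of an equivalent loop with apimachinery's wait helpers, reusing the isPodReady check sketched earlier (waitPodReady is an illustrative name, not minikube's helper):

    import (
        "context"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls every 500ms, matching the request spacing in
    // the log, until the pod is Ready or the timeout expires.
    func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            return isPodReady(pod), nil
        })
    }

A call such as waitPodReady(cs, "kube-system", "etcd-multinode-938028", 6*time.Minute) would mirror the "waiting up to 6m0s" lines; minikube's loop additionally re-fetches the node each iteration, which is why every cycle above pairs a pod GET with a node GET.
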
	I0817 21:30:14.947283  102247 api_server.go:52] waiting for apiserver process to appear ...
	I0817 21:30:14.947329  102247 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 21:30:14.956616  102247 command_runner.go:130] > 1425
	I0817 21:30:14.957309  102247 api_server.go:72] duration metric: took 8.1903474s to wait for apiserver process to appear ...
	I0817 21:30:14.957323  102247 api_server.go:88] waiting for apiserver healthz status ...
	I0817 21:30:14.957336  102247 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0817 21:30:14.961828  102247 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0817 21:30:14.961881  102247 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0817 21:30:14.961889  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:14.961911  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:14.961931  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:14.962827  102247 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0817 21:30:14.962839  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:14.962845  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:14.962851  102247 round_trippers.go:580]     Content-Length: 263
	I0817 21:30:14.962856  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:14 GMT
	I0817 21:30:14.962862  102247 round_trippers.go:580]     Audit-Id: 31a0512f-c13c-46c4-9b80-d8c362947786
	I0817 21:30:14.962867  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:14.962873  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:14.962878  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:14.962895  102247 request.go:1188] Response Body: {
	  "major": "1",
	  "minor": "27",
	  "gitVersion": "v1.27.4",
	  "gitCommit": "fa3d7990104d7c1f16943a67f11b154b71f6a132",
	  "gitTreeState": "clean",
	  "buildDate": "2023-07-19T12:14:49Z",
	  "goVersion": "go1.20.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0817 21:30:14.962966  102247 api_server.go:141] control plane version: v1.27.4
	I0817 21:30:14.962978  102247 api_server.go:131] duration metric: took 5.650884ms to wait for apiserver health ...
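
Both probes above are reachable through client-go's discovery client: /healthz is a raw GET that answers 200 with the literal body "ok", and /version is decoded by ServerVersion. A sketch (cs is a clientset built from a kubeconfig, as assumed in the earlier sketches; checkAPIServer is an illustrative name):

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
    )

    func checkAPIServer(cs kubernetes.Interface) error {
        // GET /healthz: the 200/"ok" pair logged above.
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
        if err != nil {
            return err
        }
        fmt.Println(string(body)) // "ok"

        // GET /version: GitVersion carries the "v1.27.4" reported as
        // the control plane version above.
        v, err := cs.Discovery().ServerVersion()
        if err != nil {
            return err
        }
        fmt.Println(v.GitVersion)
        return nil
    }
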
	I0817 21:30:14.962985  102247 system_pods.go:43] waiting for kube-system pods to appear ...
	I0817 21:30:15.144410  102247 request.go:628] Waited for 181.344595ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0817 21:30:15.144484  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0817 21:30:15.144490  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:15.144501  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:15.144517  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:15.147987  102247 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:30:15.148008  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:15.148018  102247 round_trippers.go:580]     Audit-Id: 76437069-bf51-4aaa-899d-f42ca1f04d4f
	I0817 21:30:15.148027  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:15.148041  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:15.148050  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:15.148063  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:15.148074  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:15 GMT
	I0817 21:30:15.148614  102247 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"457"},"items":[{"metadata":{"name":"coredns-5d78c9869d-klmz7","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"9cb10fd3-6480-47a0-8698-0573bb8dbfd1","resourceVersion":"435","creationTimestamp":"2023-08-17T21:30:06Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"2c31a337-e481-47f2-9524-9a6e8cf199fb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:30:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c31a337-e481-47f2-9524-9a6e8cf199fb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55613 chars]
	I0817 21:30:15.150320  102247 system_pods.go:59] 8 kube-system pods found
	I0817 21:30:15.150340  102247 system_pods.go:61] "coredns-5d78c9869d-klmz7" [9cb10fd3-6480-47a0-8698-0573bb8dbfd1] Running
	I0817 21:30:15.150345  102247 system_pods.go:61] "etcd-multinode-938028" [8467d526-5134-4571-bd8b-37cba78ca9a6] Running
	I0817 21:30:15.150350  102247 system_pods.go:61] "kindnet-qm6gj" [5b0d01f7-ea47-41a7-9b63-9ca0e667333d] Running
	I0817 21:30:15.150355  102247 system_pods.go:61] "kube-apiserver-multinode-938028" [6dac9864-4745-4595-9cc1-a8ce957c247c] Running
	I0817 21:30:15.150362  102247 system_pods.go:61] "kube-controller-manager-multinode-938028" [4089bb0e-1099-40d1-9df4-68943ea6fb68] Running
	I0817 21:30:15.150366  102247 system_pods.go:61] "kube-proxy-bf5b5" [39b3791d-3973-4cb6-ac55-eecde2f2fd0f] Running
	I0817 21:30:15.150370  102247 system_pods.go:61] "kube-scheduler-multinode-938028" [ec4e68df-918c-4e2c-b757-5117e84954d2] Running
	I0817 21:30:15.150374  102247 system_pods.go:61] "storage-provisioner" [2717746b-904a-44d5-82f6-301899f718aa] Running
	I0817 21:30:15.150381  102247 system_pods.go:74] duration metric: took 187.387259ms to wait for pod list to return data ...
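The pod-list wait above is a plain poll against /api/v1/namespaces/kube-system/pods; the "Waited ... due to client-side throttling" lines come from client-go's default client-side rate limiter, not API priority and fairness. A minimal client-go sketch of the same readiness check (the helper name, timeout, and poll interval are illustrative, not minikube's actual code):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForSystemPods polls kube-system until every pod reports phase Running.
    func waitForSystemPods(cs *kubernetes.Clientset, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
            if err == nil && len(pods.Items) > 0 {
                allRunning := true
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        allRunning = false
                        break
                    }
                }
                if allRunning {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out after %s waiting for kube-system pods", timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        fmt.Println(waitForSystemPods(kubernetes.NewForConfigOrDie(cfg), 2*time.Minute))
    }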
	I0817 21:30:15.150388  102247 default_sa.go:34] waiting for default service account to be created ...
	I0817 21:30:15.343725  102247 request.go:628] Waited for 193.277096ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0817 21:30:15.343788  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0817 21:30:15.343793  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:15.343800  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:15.343807  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:15.345970  102247 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:30:15.345989  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:15.345996  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:15.346002  102247 round_trippers.go:580]     Content-Length: 261
	I0817 21:30:15.346008  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:15 GMT
	I0817 21:30:15.346013  102247 round_trippers.go:580]     Audit-Id: ecb99c28-9486-49bc-b8a7-99f803032f47
	I0817 21:30:15.346019  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:15.346024  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:15.346029  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:15.346051  102247 request.go:1188] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"457"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"e437741a-5db3-4e4b-aa97-edfc25a10bf6","resourceVersion":"369","creationTimestamp":"2023-08-17T21:30:06Z"}}]}
	I0817 21:30:15.346226  102247 default_sa.go:45] found service account: "default"
	I0817 21:30:15.346240  102247 default_sa.go:55] duration metric: took 195.847437ms for default service account to be created ...
	I0817 21:30:15.346247  102247 system_pods.go:116] waiting for k8s-apps to be running ...
	I0817 21:30:15.544654  102247 request.go:628] Waited for 198.352852ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0817 21:30:15.544718  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0817 21:30:15.544723  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:15.544730  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:15.544738  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:15.547713  102247 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:30:15.547740  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:15.547752  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:15.547761  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:15 GMT
	I0817 21:30:15.547771  102247 round_trippers.go:580]     Audit-Id: 7c98ffda-eec2-45e0-81c0-2200e9192372
	I0817 21:30:15.547779  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:15.547789  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:15.547798  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:15.548265  102247 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"457"},"items":[{"metadata":{"name":"coredns-5d78c9869d-klmz7","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"9cb10fd3-6480-47a0-8698-0573bb8dbfd1","resourceVersion":"435","creationTimestamp":"2023-08-17T21:30:06Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"2c31a337-e481-47f2-9524-9a6e8cf199fb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:30:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c31a337-e481-47f2-9524-9a6e8cf199fb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55613 chars]
	I0817 21:30:15.550009  102247 system_pods.go:86] 8 kube-system pods found
	I0817 21:30:15.550029  102247 system_pods.go:89] "coredns-5d78c9869d-klmz7" [9cb10fd3-6480-47a0-8698-0573bb8dbfd1] Running
	I0817 21:30:15.550034  102247 system_pods.go:89] "etcd-multinode-938028" [8467d526-5134-4571-bd8b-37cba78ca9a6] Running
	I0817 21:30:15.550038  102247 system_pods.go:89] "kindnet-qm6gj" [5b0d01f7-ea47-41a7-9b63-9ca0e667333d] Running
	I0817 21:30:15.550043  102247 system_pods.go:89] "kube-apiserver-multinode-938028" [6dac9864-4745-4595-9cc1-a8ce957c247c] Running
	I0817 21:30:15.550054  102247 system_pods.go:89] "kube-controller-manager-multinode-938028" [4089bb0e-1099-40d1-9df4-68943ea6fb68] Running
	I0817 21:30:15.550061  102247 system_pods.go:89] "kube-proxy-bf5b5" [39b3791d-3973-4cb6-ac55-eecde2f2fd0f] Running
	I0817 21:30:15.550072  102247 system_pods.go:89] "kube-scheduler-multinode-938028" [ec4e68df-918c-4e2c-b757-5117e84954d2] Running
	I0817 21:30:15.550078  102247 system_pods.go:89] "storage-provisioner" [2717746b-904a-44d5-82f6-301899f718aa] Running
	I0817 21:30:15.550090  102247 system_pods.go:126] duration metric: took 203.838199ms to wait for k8s-apps to be running ...
	I0817 21:30:15.550100  102247 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 21:30:15.550150  102247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 21:30:15.560586  102247 system_svc.go:56] duration metric: took 10.479399ms WaitForService to wait for kubelet.
	I0817 21:30:15.560607  102247 kubeadm.go:581] duration metric: took 8.793649053s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 21:30:15.560623  102247 node_conditions.go:102] verifying NodePressure condition ...
	I0817 21:30:15.744006  102247 request.go:628] Waited for 183.313708ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0817 21:30:15.744060  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0817 21:30:15.744067  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:15.744077  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:15.744086  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:15.746397  102247 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:30:15.746416  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:15.746428  102247 round_trippers.go:580]     Audit-Id: af5813a9-2aa9-463a-b385-4aaabe965e39
	I0817 21:30:15.746434  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:15.746440  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:15.746445  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:15.746451  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:15.746456  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:15 GMT
	I0817 21:30:15.746562  102247 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"457"},"items":[{"metadata":{"name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","resourceVersion":"425","creationTimestamp":"2023-08-17T21:29:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-938028","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-938028","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_29_54_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6000 chars]
	I0817 21:30:15.746923  102247 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0817 21:30:15.746940  102247 node_conditions.go:123] node cpu capacity is 8
	I0817 21:30:15.746949  102247 node_conditions.go:105] duration metric: took 186.321551ms to run NodePressure ...
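The NodePressure step reads capacity straight off the NodeList response above. A sketch of the same read, reusing the clientset and imports from the previous example (function name and output format are illustrative):

    // printNodeCapacity mirrors the two capacity lines logged above.
    func printNodeCapacity(cs *kubernetes.Clientset) error {
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            disk := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            // e.g. "multinode-938028: cpu=8, ephemeral-storage=304681132Ki"
            fmt.Printf("%s: cpu=%s, ephemeral-storage=%s\n", n.Name, cpu.String(), disk.String())
        }
        return nil
    }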
	I0817 21:30:15.746958  102247 start.go:228] waiting for startup goroutines ...
	I0817 21:30:15.746972  102247 start.go:233] waiting for cluster config update ...
	I0817 21:30:15.746980  102247 start.go:242] writing updated cluster config ...
	I0817 21:30:15.749536  102247 out.go:177] 
	I0817 21:30:15.751188  102247 config.go:182] Loaded profile config "multinode-938028": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 21:30:15.751257  102247 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/multinode-938028/config.json ...
	I0817 21:30:15.753134  102247 out.go:177] * Starting worker node multinode-938028-m02 in cluster multinode-938028
	I0817 21:30:15.754377  102247 cache.go:122] Beginning downloading kic base image for docker with crio
	I0817 21:30:15.755946  102247 out.go:177] * Pulling base image ...
	I0817 21:30:15.757786  102247 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0817 21:30:15.757800  102247 cache.go:57] Caching tarball of preloaded images
	I0817 21:30:15.757842  102247 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0817 21:30:15.757906  102247 preload.go:174] Found /home/jenkins/minikube-integration/16865-10716/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0817 21:30:15.757918  102247 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on crio
	I0817 21:30:15.758004  102247 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/multinode-938028/config.json ...
	I0817 21:30:15.774845  102247 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0817 21:30:15.774869  102247 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0817 21:30:15.774886  102247 cache.go:195] Successfully downloaded all kic artifacts
	I0817 21:30:15.774913  102247 start.go:365] acquiring machines lock for multinode-938028-m02: {Name:mkf03d32b8cfcee7aa2f3b077e353bf4e6c5c14d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 21:30:15.775001  102247 start.go:369] acquired machines lock for "multinode-938028-m02" in 72.253µs
	I0817 21:30:15.775022  102247 start.go:93] Provisioning new machine with config: &{Name:multinode-938028 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-938028 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name:m02 IP: Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0817 21:30:15.775093  102247 start.go:125] createHost starting for "m02" (driver="docker")
	I0817 21:30:15.777955  102247 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0817 21:30:15.778040  102247 start.go:159] libmachine.API.Create for "multinode-938028" (driver="docker")
	I0817 21:30:15.778064  102247 client.go:168] LocalClient.Create starting
	I0817 21:30:15.778119  102247 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca.pem
	I0817 21:30:15.778146  102247 main.go:141] libmachine: Decoding PEM data...
	I0817 21:30:15.778161  102247 main.go:141] libmachine: Parsing certificate...
	I0817 21:30:15.778209  102247 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16865-10716/.minikube/certs/cert.pem
	I0817 21:30:15.778226  102247 main.go:141] libmachine: Decoding PEM data...
	I0817 21:30:15.778238  102247 main.go:141] libmachine: Parsing certificate...
	I0817 21:30:15.778407  102247 cli_runner.go:164] Run: docker network inspect multinode-938028 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 21:30:15.793642  102247 network_create.go:76] Found existing network {name:multinode-938028 subnet:0xc0014f0c60 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0817 21:30:15.793675  102247 kic.go:117] calculated static IP "192.168.58.3" for the "multinode-938028-m02" container
	I0817 21:30:15.793723  102247 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0817 21:30:15.809651  102247 cli_runner.go:164] Run: docker volume create multinode-938028-m02 --label name.minikube.sigs.k8s.io=multinode-938028-m02 --label created_by.minikube.sigs.k8s.io=true
	I0817 21:30:15.825878  102247 oci.go:103] Successfully created a docker volume multinode-938028-m02
	I0817 21:30:15.825966  102247 cli_runner.go:164] Run: docker run --rm --name multinode-938028-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-938028-m02 --entrypoint /usr/bin/test -v multinode-938028-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0817 21:30:16.337640  102247 oci.go:107] Successfully prepared a docker volume multinode-938028-m02
	I0817 21:30:16.337699  102247 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0817 21:30:16.337721  102247 kic.go:190] Starting extracting preloaded images to volume ...
	I0817 21:30:16.337803  102247 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16865-10716/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-938028-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0817 21:30:21.113151  102247 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16865-10716/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-938028-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (4.775302812s)
	I0817 21:30:21.113186  102247 kic.go:199] duration metric: took 4.775463 seconds to extract preloaded images to volume
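The ~4.8s step above seeds the new node's /var volume from the lz4 preload tarball so the m02 node never pulls images over the network. An os/exec sketch of that docker invocation (tarball path, volume name, and image digest copied from the log; error handling trimmed):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        tarball := "/home/jenkins/minikube-integration/16865-10716/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4"
        image := "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631"
        // Run a throwaway kicbase container whose only job is to untar the
        // preload into the named volume mounted at /extractDir.
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", "multinode-938028-m02:/extractDir",
            image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        out, err := cmd.CombinedOutput()
        fmt.Println(string(out), err)
    }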
	W0817 21:30:21.113321  102247 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0817 21:30:21.113429  102247 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0817 21:30:21.163085  102247 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-938028-m02 --name multinode-938028-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-938028-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-938028-m02 --network multinode-938028 --ip 192.168.58.3 --volume multinode-938028-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
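Note the --publish=127.0.0.1:: flags in the docker run above: each guest port (22, 2376, 5000, 8443, 32443) is bound to a random loopback port on the host, which later steps resolve with container inspect before opening SSH. A sketch of that lookup (container name taken from the log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Ask Docker which host port was randomly assigned to the guest's 22/tcp.
        out, err := exec.Command("docker", "container", "inspect", "-f",
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
            "multinode-938028-m02").Output()
        if err != nil {
            panic(err)
        }
        fmt.Println("ssh port:", strings.TrimSpace(string(out))) // e.g. 32852
    }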
	I0817 21:30:21.462629  102247 cli_runner.go:164] Run: docker container inspect multinode-938028-m02 --format={{.State.Running}}
	I0817 21:30:21.479725  102247 cli_runner.go:164] Run: docker container inspect multinode-938028-m02 --format={{.State.Status}}
	I0817 21:30:21.497338  102247 cli_runner.go:164] Run: docker exec multinode-938028-m02 stat /var/lib/dpkg/alternatives/iptables
	I0817 21:30:21.549837  102247 oci.go:144] the created container "multinode-938028-m02" has a running status.
	I0817 21:30:21.549874  102247 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16865-10716/.minikube/machines/multinode-938028-m02/id_rsa...
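Creating the ssh key for kic is an ordinary RSA keypair written as a PEM private key plus an authorized_keys line (the 381-byte file pushed into the container below). A sketch assuming a 2048-bit key; minikube's exact parameters may differ:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        // PEM-encode the private half (id_rsa).
        privPEM := pem.EncodeToMemory(&pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(key),
        })
        // Render the public half in OpenSSH authorized_keys format (id_rsa.pub).
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            panic(err)
        }
        os.WriteFile("id_rsa", privPEM, 0600)
        os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0644)
    }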
	I0817 21:30:21.742163  102247 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/machines/multinode-938028-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0817 21:30:21.742202  102247 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16865-10716/.minikube/machines/multinode-938028-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0817 21:30:21.763354  102247 cli_runner.go:164] Run: docker container inspect multinode-938028-m02 --format={{.State.Status}}
	I0817 21:30:21.778824  102247 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0817 21:30:21.778843  102247 kic_runner.go:114] Args: [docker exec --privileged multinode-938028-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0817 21:30:21.834897  102247 cli_runner.go:164] Run: docker container inspect multinode-938028-m02 --format={{.State.Status}}
	I0817 21:30:21.855679  102247 machine.go:88] provisioning docker machine ...
	I0817 21:30:21.855718  102247 ubuntu.go:169] provisioning hostname "multinode-938028-m02"
	I0817 21:30:21.855786  102247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-938028-m02
	I0817 21:30:21.883204  102247 main.go:141] libmachine: Using SSH client type: native
	I0817 21:30:21.883659  102247 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0817 21:30:21.883680  102247 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-938028-m02 && echo "multinode-938028-m02" | sudo tee /etc/hostname
	I0817 21:30:22.116041  102247 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-938028-m02
	
	I0817 21:30:22.116125  102247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-938028-m02
	I0817 21:30:22.133016  102247 main.go:141] libmachine: Using SSH client type: native
	I0817 21:30:22.133401  102247 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0817 21:30:22.133420  102247 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-938028-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-938028-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-938028-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 21:30:22.261543  102247 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 21:30:22.261571  102247 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16865-10716/.minikube CaCertPath:/home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16865-10716/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16865-10716/.minikube}
	I0817 21:30:22.261590  102247 ubuntu.go:177] setting up certificates
	I0817 21:30:22.261599  102247 provision.go:83] configureAuth start
	I0817 21:30:22.261653  102247 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-938028-m02
	I0817 21:30:22.277334  102247 provision.go:138] copyHostCerts
	I0817 21:30:22.277370  102247 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16865-10716/.minikube/ca.pem
	I0817 21:30:22.277402  102247 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-10716/.minikube/ca.pem, removing ...
	I0817 21:30:22.277413  102247 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-10716/.minikube/ca.pem
	I0817 21:30:22.277478  102247 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16865-10716/.minikube/ca.pem (1078 bytes)
	I0817 21:30:22.277559  102247 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16865-10716/.minikube/cert.pem
	I0817 21:30:22.277583  102247 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-10716/.minikube/cert.pem, removing ...
	I0817 21:30:22.277593  102247 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-10716/.minikube/cert.pem
	I0817 21:30:22.277627  102247 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16865-10716/.minikube/cert.pem (1123 bytes)
	I0817 21:30:22.277691  102247 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16865-10716/.minikube/key.pem
	I0817 21:30:22.277713  102247 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-10716/.minikube/key.pem, removing ...
	I0817 21:30:22.277722  102247 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-10716/.minikube/key.pem
	I0817 21:30:22.277762  102247 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16865-10716/.minikube/key.pem (1679 bytes)
	I0817 21:30:22.277828  102247 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16865-10716/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca-key.pem org=jenkins.multinode-938028-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-938028-m02]
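The server cert is signed by the minikube CA with every entry from the san=[...] list above baked into the certificate's SANs. A crypto/x509 sketch of that signing step; a throwaway self-signed CA stands in for minikube's real ca.pem/ca-key.pem, and serials and validity periods are placeholders:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Throwaway CA standing in for minikube's ca.pem/ca-key.pem.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert with the SANs from the provision.go log line above.
        serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-938028-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("192.168.58.3"), net.ParseIP("127.0.0.1")},
            DNSNames:     []string{"localhost", "minikube", "multinode-938028-m02"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
        fmt.Println(len(der), err)
    }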
	I0817 21:30:22.392742  102247 provision.go:172] copyRemoteCerts
	I0817 21:30:22.392811  102247 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 21:30:22.392856  102247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-938028-m02
	I0817 21:30:22.410322  102247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/multinode-938028-m02/id_rsa Username:docker}
	I0817 21:30:22.501781  102247 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0817 21:30:22.501857  102247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0817 21:30:22.521733  102247 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0817 21:30:22.521782  102247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0817 21:30:22.541255  102247 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0817 21:30:22.541319  102247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0817 21:30:22.561189  102247 provision.go:86] duration metric: configureAuth took 299.580653ms
	I0817 21:30:22.561213  102247 ubuntu.go:193] setting minikube options for container-runtime
	I0817 21:30:22.561387  102247 config.go:182] Loaded profile config "multinode-938028": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 21:30:22.561479  102247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-938028-m02
	I0817 21:30:22.577634  102247 main.go:141] libmachine: Using SSH client type: native
	I0817 21:30:22.578215  102247 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0817 21:30:22.578240  102247 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0817 21:30:22.784117  102247 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0817 21:30:22.784142  102247 machine.go:91] provisioned docker machine in 928.441949ms
	I0817 21:30:22.784151  102247 client.go:171] LocalClient.Create took 7.006081362s
	I0817 21:30:22.784168  102247 start.go:167] duration metric: libmachine.API.Create for "multinode-938028" took 7.00612676s
	I0817 21:30:22.784174  102247 start.go:300] post-start starting for "multinode-938028-m02" (driver="docker")
	I0817 21:30:22.784183  102247 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 21:30:22.784233  102247 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 21:30:22.784270  102247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-938028-m02
	I0817 21:30:22.800044  102247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/multinode-938028-m02/id_rsa Username:docker}
	I0817 21:30:22.890133  102247 ssh_runner.go:195] Run: cat /etc/os-release
	I0817 21:30:22.892872  102247 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.2 LTS"
	I0817 21:30:22.892889  102247 command_runner.go:130] > NAME="Ubuntu"
	I0817 21:30:22.892895  102247 command_runner.go:130] > VERSION_ID="22.04"
	I0817 21:30:22.892900  102247 command_runner.go:130] > VERSION="22.04.2 LTS (Jammy Jellyfish)"
	I0817 21:30:22.892911  102247 command_runner.go:130] > VERSION_CODENAME=jammy
	I0817 21:30:22.892915  102247 command_runner.go:130] > ID=ubuntu
	I0817 21:30:22.892919  102247 command_runner.go:130] > ID_LIKE=debian
	I0817 21:30:22.892923  102247 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0817 21:30:22.892928  102247 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0817 21:30:22.892934  102247 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0817 21:30:22.892940  102247 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0817 21:30:22.892944  102247 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0817 21:30:22.892998  102247 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0817 21:30:22.893021  102247 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0817 21:30:22.893032  102247 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0817 21:30:22.893040  102247 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0817 21:30:22.893047  102247 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-10716/.minikube/addons for local assets ...
	I0817 21:30:22.893093  102247 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-10716/.minikube/files for local assets ...
	I0817 21:30:22.893158  102247 filesync.go:149] local asset: /home/jenkins/minikube-integration/16865-10716/.minikube/files/etc/ssl/certs/175042.pem -> 175042.pem in /etc/ssl/certs
	I0817 21:30:22.893167  102247 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/files/etc/ssl/certs/175042.pem -> /etc/ssl/certs/175042.pem
	I0817 21:30:22.893237  102247 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0817 21:30:22.900390  102247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/files/etc/ssl/certs/175042.pem --> /etc/ssl/certs/175042.pem (1708 bytes)
	I0817 21:30:22.920135  102247 start.go:303] post-start completed in 135.949895ms
	I0817 21:30:22.920472  102247 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-938028-m02
	I0817 21:30:22.936289  102247 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/multinode-938028/config.json ...
	I0817 21:30:22.936568  102247 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 21:30:22.936618  102247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-938028-m02
	I0817 21:30:22.951190  102247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/multinode-938028-m02/id_rsa Username:docker}
	I0817 21:30:23.042379  102247 command_runner.go:130] > 19%
	I0817 21:30:23.042447  102247 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0817 21:30:23.046100  102247 command_runner.go:130] > 237G
	I0817 21:30:23.046256  102247 start.go:128] duration metric: createHost completed in 7.27115393s
	I0817 21:30:23.046271  102247 start.go:83] releasing machines lock for "multinode-938028-m02", held for 7.271259598s
	I0817 21:30:23.046331  102247 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-938028-m02
	I0817 21:30:23.064830  102247 out.go:177] * Found network options:
	I0817 21:30:23.066464  102247 out.go:177]   - NO_PROXY=192.168.58.2
	W0817 21:30:23.067982  102247 proxy.go:119] fail to check proxy env: Error ip not in block
	W0817 21:30:23.068014  102247 proxy.go:119] fail to check proxy env: Error ip not in block
	I0817 21:30:23.068083  102247 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0817 21:30:23.068127  102247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-938028-m02
	I0817 21:30:23.068170  102247 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0817 21:30:23.068225  102247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-938028-m02
	I0817 21:30:23.083832  102247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/multinode-938028-m02/id_rsa Username:docker}
	I0817 21:30:23.084520  102247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/multinode-938028-m02/id_rsa Username:docker}
	I0817 21:30:23.262837  102247 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0817 21:30:23.301939  102247 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0817 21:30:23.305909  102247 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0817 21:30:23.305938  102247 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0817 21:30:23.305948  102247 command_runner.go:130] > Device: b0h/176d	Inode: 834778      Links: 1
	I0817 21:30:23.305959  102247 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0817 21:30:23.305974  102247 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0817 21:30:23.305985  102247 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0817 21:30:23.305994  102247 command_runner.go:130] > Change: 2023-08-17 21:10:53.572447170 +0000
	I0817 21:30:23.305999  102247 command_runner.go:130] >  Birth: 2023-08-17 21:10:53.572447170 +0000
	I0817 21:30:23.306060  102247 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 21:30:23.322855  102247 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0817 21:30:23.322923  102247 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 21:30:23.348445  102247 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0817 21:30:23.348484  102247 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
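Disabling the competing loopback/bridge/podman CNI configs is just a rename to a .mk_disabled suffix, so that only the CNI minikube chose (kindnet here) is loaded by CRI-O. A hedged sketch of the equivalent logic (the real code shells out to find/mv over SSH as logged above):

    package main

    import (
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        // Move competing CNI configs aside so only minikube's chosen CNI loads.
        matches, _ := filepath.Glob("/etc/cni/net.d/*")
        for _, f := range matches {
            base := filepath.Base(f)
            if strings.HasSuffix(base, ".mk_disabled") {
                continue // already disabled
            }
            if strings.Contains(base, "bridge") || strings.Contains(base, "podman") || strings.Contains(base, "loopback.conf") {
                _ = os.Rename(f, f+".mk_disabled")
            }
        }
    }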
	I0817 21:30:23.348490  102247 start.go:466] detecting cgroup driver to use...
	I0817 21:30:23.348516  102247 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0817 21:30:23.348550  102247 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0817 21:30:23.361541  102247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0817 21:30:23.370699  102247 docker.go:196] disabling cri-docker service (if available) ...
	I0817 21:30:23.370747  102247 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0817 21:30:23.382117  102247 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0817 21:30:23.393644  102247 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0817 21:30:23.473698  102247 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0817 21:30:23.486035  102247 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0817 21:30:23.557107  102247 docker.go:212] disabling docker service ...
	I0817 21:30:23.557170  102247 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0817 21:30:23.573704  102247 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0817 21:30:23.583615  102247 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0817 21:30:23.594018  102247 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0817 21:30:23.658591  102247 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0817 21:30:23.737565  102247 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0817 21:30:23.737640  102247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0817 21:30:23.747315  102247 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 21:30:23.760171  102247 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0817 21:30:23.760972  102247 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0817 21:30:23.761037  102247 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 21:30:23.769113  102247 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0817 21:30:23.769174  102247 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 21:30:23.777386  102247 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 21:30:23.785345  102247 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 21:30:23.793334  102247 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0817 21:30:23.800721  102247 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0817 21:30:23.807392  102247 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0817 21:30:23.807429  102247 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0817 21:30:23.814134  102247 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0817 21:30:23.883860  102247 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0817 21:30:23.968572  102247 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0817 21:30:23.968638  102247 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0817 21:30:23.971991  102247 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0817 21:30:23.972018  102247 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0817 21:30:23.972030  102247 command_runner.go:130] > Device: bah/186d	Inode: 186         Links: 1
	I0817 21:30:23.972042  102247 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0817 21:30:23.972052  102247 command_runner.go:130] > Access: 2023-08-17 21:30:23.956740795 +0000
	I0817 21:30:23.972067  102247 command_runner.go:130] > Modify: 2023-08-17 21:30:23.956740795 +0000
	I0817 21:30:23.972076  102247 command_runner.go:130] > Change: 2023-08-17 21:30:23.956740795 +0000
	I0817 21:30:23.972091  102247 command_runner.go:130] >  Birth: -
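The 60s socket wait amounts to polling stat on /var/run/crio/crio.sock until CRI-O comes back after the restart. A self-contained sketch (the 250ms interval is an assumption, not minikube's actual value):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until the CRI socket exists or the deadline passes.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil // socket is present
            }
            time.Sleep(250 * time.Millisecond)
        }
        return fmt.Errorf("socket %s did not appear within %s", path, timeout)
    }

    func main() {
        fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
    }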
	I0817 21:30:23.972128  102247 start.go:534] Will wait 60s for crictl version
	I0817 21:30:23.972167  102247 ssh_runner.go:195] Run: which crictl
	I0817 21:30:23.975193  102247 command_runner.go:130] > /usr/bin/crictl
	I0817 21:30:23.975288  102247 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0817 21:30:24.005323  102247 command_runner.go:130] > Version:  0.1.0
	I0817 21:30:24.005345  102247 command_runner.go:130] > RuntimeName:  cri-o
	I0817 21:30:24.005350  102247 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0817 21:30:24.005355  102247 command_runner.go:130] > RuntimeApiVersion:  v1
	I0817 21:30:24.005372  102247 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0817 21:30:24.005438  102247 ssh_runner.go:195] Run: crio --version
	I0817 21:30:24.037115  102247 command_runner.go:130] > crio version 1.24.6
	I0817 21:30:24.037132  102247 command_runner.go:130] > Version:          1.24.6
	I0817 21:30:24.037139  102247 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0817 21:30:24.037143  102247 command_runner.go:130] > GitTreeState:     clean
	I0817 21:30:24.037149  102247 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0817 21:30:24.037156  102247 command_runner.go:130] > GoVersion:        go1.18.2
	I0817 21:30:24.037160  102247 command_runner.go:130] > Compiler:         gc
	I0817 21:30:24.037164  102247 command_runner.go:130] > Platform:         linux/amd64
	I0817 21:30:24.037169  102247 command_runner.go:130] > Linkmode:         dynamic
	I0817 21:30:24.037177  102247 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0817 21:30:24.037181  102247 command_runner.go:130] > SeccompEnabled:   true
	I0817 21:30:24.037185  102247 command_runner.go:130] > AppArmorEnabled:  false
	I0817 21:30:24.037248  102247 ssh_runner.go:195] Run: crio --version
	I0817 21:30:24.067669  102247 command_runner.go:130] > crio version 1.24.6
	I0817 21:30:24.067692  102247 command_runner.go:130] > Version:          1.24.6
	I0817 21:30:24.067701  102247 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0817 21:30:24.067708  102247 command_runner.go:130] > GitTreeState:     clean
	I0817 21:30:24.067717  102247 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0817 21:30:24.067724  102247 command_runner.go:130] > GoVersion:        go1.18.2
	I0817 21:30:24.067730  102247 command_runner.go:130] > Compiler:         gc
	I0817 21:30:24.067737  102247 command_runner.go:130] > Platform:         linux/amd64
	I0817 21:30:24.067745  102247 command_runner.go:130] > Linkmode:         dynamic
	I0817 21:30:24.067763  102247 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0817 21:30:24.067773  102247 command_runner.go:130] > SeccompEnabled:   true
	I0817 21:30:24.067779  102247 command_runner.go:130] > AppArmorEnabled:  false
	I0817 21:30:24.071965  102247 out.go:177] * Preparing Kubernetes v1.27.4 on CRI-O 1.24.6 ...
	I0817 21:30:24.073589  102247 out.go:177]   - env NO_PROXY=192.168.58.2
	I0817 21:30:24.075292  102247 cli_runner.go:164] Run: docker network inspect multinode-938028 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0817 21:30:24.091030  102247 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0817 21:30:24.094346  102247 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0817 21:30:24.103880  102247 certs.go:56] Setting up /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/multinode-938028 for IP: 192.168.58.3
	I0817 21:30:24.103905  102247 certs.go:190] acquiring lock for shared ca certs: {Name:mkccb042866dbfd72de305663f91f6bc6da7b7e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:30:24.104016  102247 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16865-10716/.minikube/ca.key
	I0817 21:30:24.104051  102247 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16865-10716/.minikube/proxy-client-ca.key
	I0817 21:30:24.104066  102247 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0817 21:30:24.104079  102247 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0817 21:30:24.104093  102247 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0817 21:30:24.104105  102247 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0817 21:30:24.104155  102247 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/home/jenkins/minikube-integration/16865-10716/.minikube/certs/17504.pem (1338 bytes)
	W0817 21:30:24.104182  102247 certs.go:433] ignoring /home/jenkins/minikube-integration/16865-10716/.minikube/certs/home/jenkins/minikube-integration/16865-10716/.minikube/certs/17504_empty.pem, impossibly tiny 0 bytes
	I0817 21:30:24.104192  102247 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca-key.pem (1679 bytes)
	I0817 21:30:24.104214  102247 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca.pem (1078 bytes)
	I0817 21:30:24.104237  102247 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/home/jenkins/minikube-integration/16865-10716/.minikube/certs/cert.pem (1123 bytes)
	I0817 21:30:24.104258  102247 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/home/jenkins/minikube-integration/16865-10716/.minikube/certs/key.pem (1679 bytes)
	I0817 21:30:24.104294  102247 certs.go:437] found cert: /home/jenkins/minikube-integration/16865-10716/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16865-10716/.minikube/files/etc/ssl/certs/175042.pem (1708 bytes)
	I0817 21:30:24.104319  102247 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:30:24.104333  102247 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/17504.pem -> /usr/share/ca-certificates/17504.pem
	I0817 21:30:24.104345  102247 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16865-10716/.minikube/files/etc/ssl/certs/175042.pem -> /usr/share/ca-certificates/175042.pem
	I0817 21:30:24.104636  102247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0817 21:30:24.125108  102247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0817 21:30:24.145436  102247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0817 21:30:24.164987  102247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0817 21:30:24.184924  102247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0817 21:30:24.204685  102247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/certs/17504.pem --> /usr/share/ca-certificates/17504.pem (1338 bytes)
	I0817 21:30:24.223772  102247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/files/etc/ssl/certs/175042.pem --> /usr/share/ca-certificates/175042.pem (1708 bytes)
	I0817 21:30:24.243534  102247 ssh_runner.go:195] Run: openssl version
	I0817 21:30:24.248190  102247 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0817 21:30:24.248407  102247 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0817 21:30:24.256291  102247 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:30:24.259324  102247 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 17 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:30:24.259359  102247 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 17 21:11 /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:30:24.259386  102247 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0817 21:30:24.265292  102247 command_runner.go:130] > b5213941
	I0817 21:30:24.265352  102247 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0817 21:30:24.272946  102247 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17504.pem && ln -fs /usr/share/ca-certificates/17504.pem /etc/ssl/certs/17504.pem"
	I0817 21:30:24.280737  102247 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17504.pem
	I0817 21:30:24.283707  102247 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 17 21:16 /usr/share/ca-certificates/17504.pem
	I0817 21:30:24.283763  102247 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 17 21:16 /usr/share/ca-certificates/17504.pem
	I0817 21:30:24.283808  102247 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17504.pem
	I0817 21:30:24.289501  102247 command_runner.go:130] > 51391683
	I0817 21:30:24.289690  102247 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17504.pem /etc/ssl/certs/51391683.0"
	I0817 21:30:24.297191  102247 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/175042.pem && ln -fs /usr/share/ca-certificates/175042.pem /etc/ssl/certs/175042.pem"
	I0817 21:30:24.305248  102247 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/175042.pem
	I0817 21:30:24.308069  102247 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 17 21:16 /usr/share/ca-certificates/175042.pem
	I0817 21:30:24.308089  102247 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 17 21:16 /usr/share/ca-certificates/175042.pem
	I0817 21:30:24.308118  102247 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/175042.pem
	I0817 21:30:24.313677  102247 command_runner.go:130] > 3ec20f2e
	I0817 21:30:24.313939  102247 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/175042.pem /etc/ssl/certs/3ec20f2e.0"
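The hex filenames above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject hashes: /etc/ssl/certs is a hashed CA directory, so each trusted cert gets a <subject-hash>.0 symlink that OpenSSL uses for lookup. A sketch of the hash-and-link step the log shows:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pem := "/usr/share/ca-certificates/minikubeCA.pem"
        // openssl prints the subject hash used for on-disk CA lookup.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := "/etc/ssl/certs/" + hash + ".0"
        fmt.Println("linking", link, "->", pem)
        _ = os.Symlink(pem, link) // needs root on a real system
    }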
	I0817 21:30:24.321535  102247 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0817 21:30:24.324263  102247 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0817 21:30:24.324329  102247 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0817 21:30:24.324402  102247 ssh_runner.go:195] Run: crio config
	I0817 21:30:24.357499  102247 command_runner.go:130] ! time="2023-08-17 21:30:24.357136886Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0817 21:30:24.357523  102247 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0817 21:30:24.362040  102247 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0817 21:30:24.362059  102247 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0817 21:30:24.362070  102247 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0817 21:30:24.362075  102247 command_runner.go:130] > #
	I0817 21:30:24.362087  102247 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0817 21:30:24.362102  102247 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0817 21:30:24.362116  102247 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0817 21:30:24.362134  102247 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0817 21:30:24.362143  102247 command_runner.go:130] > # reload'.
	I0817 21:30:24.362156  102247 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0817 21:30:24.362170  102247 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0817 21:30:24.362183  102247 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0817 21:30:24.362195  102247 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0817 21:30:24.362203  102247 command_runner.go:130] > [crio]
	I0817 21:30:24.362218  102247 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0817 21:30:24.362229  102247 command_runner.go:130] > # container images, in this directory.
	I0817 21:30:24.362248  102247 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0817 21:30:24.362261  102247 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0817 21:30:24.362269  102247 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0817 21:30:24.362275  102247 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0817 21:30:24.362283  102247 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0817 21:30:24.362291  102247 command_runner.go:130] > # storage_driver = "vfs"
	I0817 21:30:24.362299  102247 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0817 21:30:24.362307  102247 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0817 21:30:24.362314  102247 command_runner.go:130] > # storage_option = [
	I0817 21:30:24.362317  102247 command_runner.go:130] > # ]
	I0817 21:30:24.362345  102247 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0817 21:30:24.362356  102247 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0817 21:30:24.362364  102247 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0817 21:30:24.362369  102247 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0817 21:30:24.362377  102247 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0817 21:30:24.362384  102247 command_runner.go:130] > # always happen on a node reboot
	I0817 21:30:24.362389  102247 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0817 21:30:24.362396  102247 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0817 21:30:24.362405  102247 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0817 21:30:24.362430  102247 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0817 21:30:24.362438  102247 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0817 21:30:24.362449  102247 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0817 21:30:24.362458  102247 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0817 21:30:24.362464  102247 command_runner.go:130] > # internal_wipe = true
	I0817 21:30:24.362470  102247 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0817 21:30:24.362478  102247 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0817 21:30:24.362486  102247 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0817 21:30:24.362494  102247 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0817 21:30:24.362502  102247 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0817 21:30:24.362508  102247 command_runner.go:130] > [crio.api]
	I0817 21:30:24.362516  102247 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0817 21:30:24.362529  102247 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0817 21:30:24.362537  102247 command_runner.go:130] > # IP address on which the stream server will listen.
	I0817 21:30:24.362543  102247 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0817 21:30:24.362550  102247 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0817 21:30:24.362558  102247 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0817 21:30:24.362562  102247 command_runner.go:130] > # stream_port = "0"
	I0817 21:30:24.362569  102247 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0817 21:30:24.362574  102247 command_runner.go:130] > # stream_enable_tls = false
	I0817 21:30:24.362582  102247 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0817 21:30:24.362589  102247 command_runner.go:130] > # stream_idle_timeout = ""
	I0817 21:30:24.362595  102247 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0817 21:30:24.362603  102247 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0817 21:30:24.362609  102247 command_runner.go:130] > # minutes.
	I0817 21:30:24.362615  102247 command_runner.go:130] > # stream_tls_cert = ""
	I0817 21:30:24.362620  102247 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0817 21:30:24.362631  102247 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0817 21:30:24.362636  102247 command_runner.go:130] > # stream_tls_key = ""
	I0817 21:30:24.362642  102247 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0817 21:30:24.362650  102247 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0817 21:30:24.362656  102247 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0817 21:30:24.362661  102247 command_runner.go:130] > # stream_tls_ca = ""
	I0817 21:30:24.362669  102247 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0817 21:30:24.362676  102247 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0817 21:30:24.362683  102247 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0817 21:30:24.362690  102247 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0817 21:30:24.362711  102247 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0817 21:30:24.362719  102247 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0817 21:30:24.362725  102247 command_runner.go:130] > [crio.runtime]
	I0817 21:30:24.362731  102247 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0817 21:30:24.362740  102247 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0817 21:30:24.362750  102247 command_runner.go:130] > # "nofile=1024:2048"
	I0817 21:30:24.362759  102247 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0817 21:30:24.362768  102247 command_runner.go:130] > # default_ulimits = [
	I0817 21:30:24.362779  102247 command_runner.go:130] > # ]
	I0817 21:30:24.362793  102247 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0817 21:30:24.362801  102247 command_runner.go:130] > # no_pivot = false
	I0817 21:30:24.362808  102247 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0817 21:30:24.362816  102247 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0817 21:30:24.362821  102247 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0817 21:30:24.362829  102247 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0817 21:30:24.362834  102247 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0817 21:30:24.362844  102247 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0817 21:30:24.362850  102247 command_runner.go:130] > # conmon = ""
	I0817 21:30:24.362855  102247 command_runner.go:130] > # Cgroup setting for conmon
	I0817 21:30:24.362863  102247 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0817 21:30:24.362869  102247 command_runner.go:130] > conmon_cgroup = "pod"
	I0817 21:30:24.362880  102247 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0817 21:30:24.362887  102247 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0817 21:30:24.362895  102247 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0817 21:30:24.362901  102247 command_runner.go:130] > # conmon_env = [
	I0817 21:30:24.362905  102247 command_runner.go:130] > # ]
	I0817 21:30:24.362915  102247 command_runner.go:130] > # Additional environment variables to set for all the
	I0817 21:30:24.362922  102247 command_runner.go:130] > # containers. These are overridden if set in the
	I0817 21:30:24.362928  102247 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0817 21:30:24.362934  102247 command_runner.go:130] > # default_env = [
	I0817 21:30:24.362937  102247 command_runner.go:130] > # ]
	I0817 21:30:24.362946  102247 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0817 21:30:24.362952  102247 command_runner.go:130] > # selinux = false
	I0817 21:30:24.362958  102247 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0817 21:30:24.362965  102247 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0817 21:30:24.362973  102247 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0817 21:30:24.362979  102247 command_runner.go:130] > # seccomp_profile = ""
	I0817 21:30:24.362985  102247 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0817 21:30:24.362997  102247 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0817 21:30:24.363005  102247 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0817 21:30:24.363009  102247 command_runner.go:130] > # which might increase security.
	I0817 21:30:24.363016  102247 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0817 21:30:24.363022  102247 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0817 21:30:24.363030  102247 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0817 21:30:24.363040  102247 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0817 21:30:24.363049  102247 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0817 21:30:24.363056  102247 command_runner.go:130] > # This option supports live configuration reload.
	I0817 21:30:24.363063  102247 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0817 21:30:24.363069  102247 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0817 21:30:24.363075  102247 command_runner.go:130] > # the cgroup blockio controller.
	I0817 21:30:24.363079  102247 command_runner.go:130] > # blockio_config_file = ""
	I0817 21:30:24.363088  102247 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0817 21:30:24.363094  102247 command_runner.go:130] > # irqbalance daemon.
	I0817 21:30:24.363099  102247 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0817 21:30:24.363106  102247 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0817 21:30:24.363113  102247 command_runner.go:130] > # This option supports live configuration reload.
	I0817 21:30:24.363119  102247 command_runner.go:130] > # rdt_config_file = ""
	I0817 21:30:24.363125  102247 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0817 21:30:24.363131  102247 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0817 21:30:24.363137  102247 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0817 21:30:24.363144  102247 command_runner.go:130] > # separate_pull_cgroup = ""
	I0817 21:30:24.363150  102247 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0817 21:30:24.363161  102247 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0817 21:30:24.363167  102247 command_runner.go:130] > # will be added.
	I0817 21:30:24.363171  102247 command_runner.go:130] > # default_capabilities = [
	I0817 21:30:24.363177  102247 command_runner.go:130] > # 	"CHOWN",
	I0817 21:30:24.363181  102247 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0817 21:30:24.363187  102247 command_runner.go:130] > # 	"FSETID",
	I0817 21:30:24.363191  102247 command_runner.go:130] > # 	"FOWNER",
	I0817 21:30:24.363196  102247 command_runner.go:130] > # 	"SETGID",
	I0817 21:30:24.363200  102247 command_runner.go:130] > # 	"SETUID",
	I0817 21:30:24.363206  102247 command_runner.go:130] > # 	"SETPCAP",
	I0817 21:30:24.363210  102247 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0817 21:30:24.363216  102247 command_runner.go:130] > # 	"KILL",
	I0817 21:30:24.363220  102247 command_runner.go:130] > # ]
	I0817 21:30:24.363229  102247 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0817 21:30:24.363237  102247 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0817 21:30:24.363244  102247 command_runner.go:130] > # add_inheritable_capabilities = true
	I0817 21:30:24.363250  102247 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0817 21:30:24.363258  102247 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0817 21:30:24.363266  102247 command_runner.go:130] > # default_sysctls = [
	I0817 21:30:24.363269  102247 command_runner.go:130] > # ]
	I0817 21:30:24.363276  102247 command_runner.go:130] > # List of devices on the host that a
	I0817 21:30:24.363282  102247 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0817 21:30:24.363289  102247 command_runner.go:130] > # allowed_devices = [
	I0817 21:30:24.363293  102247 command_runner.go:130] > # 	"/dev/fuse",
	I0817 21:30:24.363299  102247 command_runner.go:130] > # ]
	I0817 21:30:24.363305  102247 command_runner.go:130] > # List of additional devices, specified as
	I0817 21:30:24.363340  102247 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0817 21:30:24.363348  102247 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0817 21:30:24.363356  102247 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0817 21:30:24.363360  102247 command_runner.go:130] > # additional_devices = [
	I0817 21:30:24.363366  102247 command_runner.go:130] > # ]
	I0817 21:30:24.363371  102247 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0817 21:30:24.363378  102247 command_runner.go:130] > # cdi_spec_dirs = [
	I0817 21:30:24.363382  102247 command_runner.go:130] > # 	"/etc/cdi",
	I0817 21:30:24.363388  102247 command_runner.go:130] > # 	"/var/run/cdi",
	I0817 21:30:24.363391  102247 command_runner.go:130] > # ]
	I0817 21:30:24.363400  102247 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0817 21:30:24.363408  102247 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0817 21:30:24.363415  102247 command_runner.go:130] > # Defaults to false.
	I0817 21:30:24.363420  102247 command_runner.go:130] > # device_ownership_from_security_context = false
	I0817 21:30:24.363428  102247 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0817 21:30:24.363436  102247 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0817 21:30:24.363442  102247 command_runner.go:130] > # hooks_dir = [
	I0817 21:30:24.363447  102247 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0817 21:30:24.363453  102247 command_runner.go:130] > # ]
	I0817 21:30:24.363459  102247 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0817 21:30:24.363467  102247 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0817 21:30:24.363474  102247 command_runner.go:130] > # its default mounts from the following two files:
	I0817 21:30:24.363478  102247 command_runner.go:130] > #
	I0817 21:30:24.363486  102247 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0817 21:30:24.363494  102247 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0817 21:30:24.363503  102247 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0817 21:30:24.363508  102247 command_runner.go:130] > #
	I0817 21:30:24.363514  102247 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0817 21:30:24.363524  102247 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0817 21:30:24.363533  102247 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0817 21:30:24.363538  102247 command_runner.go:130] > #      only add mounts it finds in this file.
	I0817 21:30:24.363544  102247 command_runner.go:130] > #
	I0817 21:30:24.363548  102247 command_runner.go:130] > # default_mounts_file = ""
	I0817 21:30:24.363555  102247 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0817 21:30:24.363564  102247 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0817 21:30:24.363570  102247 command_runner.go:130] > # pids_limit = 0
	I0817 21:30:24.363577  102247 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0817 21:30:24.363585  102247 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0817 21:30:24.363593  102247 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0817 21:30:24.363602  102247 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0817 21:30:24.363608  102247 command_runner.go:130] > # log_size_max = -1
	I0817 21:30:24.363615  102247 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0817 21:30:24.363622  102247 command_runner.go:130] > # log_to_journald = false
	I0817 21:30:24.363628  102247 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0817 21:30:24.363635  102247 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0817 21:30:24.363640  102247 command_runner.go:130] > # Path to directory for container attach sockets.
	I0817 21:30:24.363649  102247 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0817 21:30:24.363657  102247 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0817 21:30:24.363664  102247 command_runner.go:130] > # bind_mount_prefix = ""
	I0817 21:30:24.363672  102247 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0817 21:30:24.363679  102247 command_runner.go:130] > # read_only = false
	I0817 21:30:24.363685  102247 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0817 21:30:24.363693  102247 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0817 21:30:24.363700  102247 command_runner.go:130] > # live configuration reload.
	I0817 21:30:24.363704  102247 command_runner.go:130] > # log_level = "info"
	I0817 21:30:24.363711  102247 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0817 21:30:24.363716  102247 command_runner.go:130] > # This option supports live configuration reload.
	I0817 21:30:24.363722  102247 command_runner.go:130] > # log_filter = ""
	I0817 21:30:24.363729  102247 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0817 21:30:24.363737  102247 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0817 21:30:24.363743  102247 command_runner.go:130] > # separated by comma.
	I0817 21:30:24.363747  102247 command_runner.go:130] > # uid_mappings = ""
	I0817 21:30:24.363755  102247 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0817 21:30:24.363763  102247 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0817 21:30:24.363771  102247 command_runner.go:130] > # separated by comma.
	I0817 21:30:24.363778  102247 command_runner.go:130] > # gid_mappings = ""
	I0817 21:30:24.363784  102247 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0817 21:30:24.363792  102247 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0817 21:30:24.363800  102247 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0817 21:30:24.363804  102247 command_runner.go:130] > # minimum_mappable_uid = -1
	I0817 21:30:24.363812  102247 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0817 21:30:24.363821  102247 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0817 21:30:24.363829  102247 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0817 21:30:24.363836  102247 command_runner.go:130] > # minimum_mappable_gid = -1
	I0817 21:30:24.363841  102247 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0817 21:30:24.363849  102247 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0817 21:30:24.363857  102247 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0817 21:30:24.363863  102247 command_runner.go:130] > # ctr_stop_timeout = 30
	I0817 21:30:24.363868  102247 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0817 21:30:24.363878  102247 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0817 21:30:24.363885  102247 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0817 21:30:24.363890  102247 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0817 21:30:24.363899  102247 command_runner.go:130] > # drop_infra_ctr = true
	I0817 21:30:24.363909  102247 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0817 21:30:24.363917  102247 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0817 21:30:24.363926  102247 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0817 21:30:24.363933  102247 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0817 21:30:24.363940  102247 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0817 21:30:24.363947  102247 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0817 21:30:24.363951  102247 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0817 21:30:24.363959  102247 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0817 21:30:24.363964  102247 command_runner.go:130] > # pinns_path = ""
	I0817 21:30:24.363972  102247 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0817 21:30:24.363978  102247 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0817 21:30:24.363991  102247 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0817 21:30:24.363998  102247 command_runner.go:130] > # default_runtime = "runc"
	I0817 21:30:24.364003  102247 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0817 21:30:24.364022  102247 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating them as directories).
	I0817 21:30:24.364036  102247 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0817 21:30:24.364044  102247 command_runner.go:130] > # creation as a file is not desired either.
	I0817 21:30:24.364057  102247 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0817 21:30:24.364064  102247 command_runner.go:130] > # the hostname is being managed dynamically.
	I0817 21:30:24.364069  102247 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0817 21:30:24.364075  102247 command_runner.go:130] > # ]
	I0817 21:30:24.364081  102247 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0817 21:30:24.364090  102247 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0817 21:30:24.364100  102247 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0817 21:30:24.364108  102247 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0817 21:30:24.364114  102247 command_runner.go:130] > #
	I0817 21:30:24.364119  102247 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0817 21:30:24.364126  102247 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0817 21:30:24.364132  102247 command_runner.go:130] > #  runtime_type = "oci"
	I0817 21:30:24.364137  102247 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0817 21:30:24.364144  102247 command_runner.go:130] > #  privileged_without_host_devices = false
	I0817 21:30:24.364148  102247 command_runner.go:130] > #  allowed_annotations = []
	I0817 21:30:24.364155  102247 command_runner.go:130] > # Where:
	I0817 21:30:24.364161  102247 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0817 21:30:24.364170  102247 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0817 21:30:24.364179  102247 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0817 21:30:24.364188  102247 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0817 21:30:24.364194  102247 command_runner.go:130] > #   in $PATH.
	I0817 21:30:24.364200  102247 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0817 21:30:24.364207  102247 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0817 21:30:24.364216  102247 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0817 21:30:24.364222  102247 command_runner.go:130] > #   state.
	I0817 21:30:24.364228  102247 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0817 21:30:24.364236  102247 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0817 21:30:24.364244  102247 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0817 21:30:24.364252  102247 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0817 21:30:24.364261  102247 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0817 21:30:24.364270  102247 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0817 21:30:24.364274  102247 command_runner.go:130] > #   The currently recognized values are:
	I0817 21:30:24.364283  102247 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0817 21:30:24.364292  102247 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0817 21:30:24.364300  102247 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0817 21:30:24.364306  102247 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0817 21:30:24.364318  102247 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0817 21:30:24.364327  102247 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0817 21:30:24.364335  102247 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0817 21:30:24.364344  102247 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0817 21:30:24.364351  102247 command_runner.go:130] > #   should be moved to the container's cgroup
	I0817 21:30:24.364355  102247 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0817 21:30:24.364362  102247 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0817 21:30:24.364366  102247 command_runner.go:130] > runtime_type = "oci"
	I0817 21:30:24.364372  102247 command_runner.go:130] > runtime_root = "/run/runc"
	I0817 21:30:24.364377  102247 command_runner.go:130] > runtime_config_path = ""
	I0817 21:30:24.364383  102247 command_runner.go:130] > monitor_path = ""
	I0817 21:30:24.364387  102247 command_runner.go:130] > monitor_cgroup = ""
	I0817 21:30:24.364392  102247 command_runner.go:130] > monitor_exec_cgroup = ""
	I0817 21:30:24.364443  102247 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0817 21:30:24.364453  102247 command_runner.go:130] > # running containers
	I0817 21:30:24.364457  102247 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0817 21:30:24.364462  102247 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0817 21:30:24.364469  102247 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0817 21:30:24.364479  102247 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I0817 21:30:24.364487  102247 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0817 21:30:24.364493  102247 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0817 21:30:24.364498  102247 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0817 21:30:24.364504  102247 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0817 21:30:24.364509  102247 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0817 21:30:24.364516  102247 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0817 21:30:24.364522  102247 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0817 21:30:24.364530  102247 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0817 21:30:24.364539  102247 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0817 21:30:24.364549  102247 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0817 21:30:24.364557  102247 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0817 21:30:24.364564  102247 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0817 21:30:24.364575  102247 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0817 21:30:24.364585  102247 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0817 21:30:24.364593  102247 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0817 21:30:24.364602  102247 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0817 21:30:24.364608  102247 command_runner.go:130] > # Example:
	I0817 21:30:24.364614  102247 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0817 21:30:24.364621  102247 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0817 21:30:24.364627  102247 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0817 21:30:24.364634  102247 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0817 21:30:24.364638  102247 command_runner.go:130] > # cpuset = "0-1"
	I0817 21:30:24.364644  102247 command_runner.go:130] > # cpushares = 0
	I0817 21:30:24.364647  102247 command_runner.go:130] > # Where:
	I0817 21:30:24.364652  102247 command_runner.go:130] > # The workload name is workload-type.
	I0817 21:30:24.364661  102247 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0817 21:30:24.364668  102247 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0817 21:30:24.364673  102247 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0817 21:30:24.364683  102247 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0817 21:30:24.364691  102247 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0817 21:30:24.364697  102247 command_runner.go:130] > # 
	I0817 21:30:24.364703  102247 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0817 21:30:24.364708  102247 command_runner.go:130] > #
	I0817 21:30:24.364714  102247 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0817 21:30:24.364723  102247 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0817 21:30:24.364734  102247 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0817 21:30:24.364743  102247 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0817 21:30:24.364748  102247 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0817 21:30:24.364756  102247 command_runner.go:130] > [crio.image]
	I0817 21:30:24.364764  102247 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0817 21:30:24.364771  102247 command_runner.go:130] > # default_transport = "docker://"
	I0817 21:30:24.364777  102247 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0817 21:30:24.364785  102247 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0817 21:30:24.364792  102247 command_runner.go:130] > # global_auth_file = ""
	I0817 21:30:24.364797  102247 command_runner.go:130] > # The image used to instantiate infra containers.
	I0817 21:30:24.364804  102247 command_runner.go:130] > # This option supports live configuration reload.
	I0817 21:30:24.364809  102247 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0817 21:30:24.364817  102247 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0817 21:30:24.364825  102247 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0817 21:30:24.364832  102247 command_runner.go:130] > # This option supports live configuration reload.
	I0817 21:30:24.364836  102247 command_runner.go:130] > # pause_image_auth_file = ""
	I0817 21:30:24.364844  102247 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0817 21:30:24.364853  102247 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0817 21:30:24.364864  102247 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0817 21:30:24.364872  102247 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0817 21:30:24.364879  102247 command_runner.go:130] > # pause_command = "/pause"
	I0817 21:30:24.364885  102247 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0817 21:30:24.364894  102247 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0817 21:30:24.364902  102247 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0817 21:30:24.364910  102247 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0817 21:30:24.364918  102247 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0817 21:30:24.364924  102247 command_runner.go:130] > # signature_policy = ""
	I0817 21:30:24.364933  102247 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0817 21:30:24.364941  102247 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0817 21:30:24.364948  102247 command_runner.go:130] > # changing them here.
	I0817 21:30:24.364952  102247 command_runner.go:130] > # insecure_registries = [
	I0817 21:30:24.364958  102247 command_runner.go:130] > # ]
	I0817 21:30:24.364963  102247 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0817 21:30:24.364971  102247 command_runner.go:130] > # ignore; the last of these ignores volumes entirely.
	I0817 21:30:24.364975  102247 command_runner.go:130] > # image_volumes = "mkdir"
	I0817 21:30:24.364983  102247 command_runner.go:130] > # Temporary directory to use for storing big files
	I0817 21:30:24.364996  102247 command_runner.go:130] > # big_files_temporary_dir = ""
	I0817 21:30:24.365004  102247 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0817 21:30:24.365011  102247 command_runner.go:130] > # CNI plugins.
	I0817 21:30:24.365015  102247 command_runner.go:130] > [crio.network]
	I0817 21:30:24.365023  102247 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0817 21:30:24.365031  102247 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0817 21:30:24.365035  102247 command_runner.go:130] > # cni_default_network = ""
	I0817 21:30:24.365044  102247 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0817 21:30:24.365051  102247 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0817 21:30:24.365056  102247 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0817 21:30:24.365062  102247 command_runner.go:130] > # plugin_dirs = [
	I0817 21:30:24.365066  102247 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0817 21:30:24.365072  102247 command_runner.go:130] > # ]
	I0817 21:30:24.365077  102247 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0817 21:30:24.365083  102247 command_runner.go:130] > [crio.metrics]
	I0817 21:30:24.365088  102247 command_runner.go:130] > # Globally enable or disable metrics support.
	I0817 21:30:24.365094  102247 command_runner.go:130] > # enable_metrics = false
	I0817 21:30:24.365098  102247 command_runner.go:130] > # Specify enabled metrics collectors.
	I0817 21:30:24.365105  102247 command_runner.go:130] > # By default, all metrics are enabled.
	I0817 21:30:24.365113  102247 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0817 21:30:24.365121  102247 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0817 21:30:24.365130  102247 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0817 21:30:24.365136  102247 command_runner.go:130] > # metrics_collectors = [
	I0817 21:30:24.365140  102247 command_runner.go:130] > # 	"operations",
	I0817 21:30:24.365147  102247 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0817 21:30:24.365151  102247 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0817 21:30:24.365158  102247 command_runner.go:130] > # 	"operations_errors",
	I0817 21:30:24.365162  102247 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0817 21:30:24.365168  102247 command_runner.go:130] > # 	"image_pulls_by_name",
	I0817 21:30:24.365172  102247 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0817 21:30:24.365178  102247 command_runner.go:130] > # 	"image_pulls_failures",
	I0817 21:30:24.365183  102247 command_runner.go:130] > # 	"image_pulls_successes",
	I0817 21:30:24.365189  102247 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0817 21:30:24.365193  102247 command_runner.go:130] > # 	"image_layer_reuse",
	I0817 21:30:24.365200  102247 command_runner.go:130] > # 	"containers_oom_total",
	I0817 21:30:24.365204  102247 command_runner.go:130] > # 	"containers_oom",
	I0817 21:30:24.365213  102247 command_runner.go:130] > # 	"processes_defunct",
	I0817 21:30:24.365219  102247 command_runner.go:130] > # 	"operations_total",
	I0817 21:30:24.365223  102247 command_runner.go:130] > # 	"operations_latency_seconds",
	I0817 21:30:24.365230  102247 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0817 21:30:24.365235  102247 command_runner.go:130] > # 	"operations_errors_total",
	I0817 21:30:24.365241  102247 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0817 21:30:24.365246  102247 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0817 21:30:24.365252  102247 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0817 21:30:24.365257  102247 command_runner.go:130] > # 	"image_pulls_success_total",
	I0817 21:30:24.365263  102247 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0817 21:30:24.365267  102247 command_runner.go:130] > # 	"containers_oom_count_total",
	I0817 21:30:24.365271  102247 command_runner.go:130] > # ]
	I0817 21:30:24.365276  102247 command_runner.go:130] > # The port on which the metrics server will listen.
	I0817 21:30:24.365282  102247 command_runner.go:130] > # metrics_port = 9090
	I0817 21:30:24.365287  102247 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0817 21:30:24.365293  102247 command_runner.go:130] > # metrics_socket = ""
	I0817 21:30:24.365298  102247 command_runner.go:130] > # The certificate for the secure metrics server.
	I0817 21:30:24.365306  102247 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0817 21:30:24.365316  102247 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0817 21:30:24.365323  102247 command_runner.go:130] > # certificate on any modification event.
	I0817 21:30:24.365327  102247 command_runner.go:130] > # metrics_cert = ""
	I0817 21:30:24.365335  102247 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0817 21:30:24.365339  102247 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0817 21:30:24.365345  102247 command_runner.go:130] > # metrics_key = ""
	I0817 21:30:24.365351  102247 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0817 21:30:24.365357  102247 command_runner.go:130] > [crio.tracing]
	I0817 21:30:24.365362  102247 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0817 21:30:24.365369  102247 command_runner.go:130] > # enable_tracing = false
	I0817 21:30:24.365374  102247 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0817 21:30:24.365381  102247 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0817 21:30:24.365386  102247 command_runner.go:130] > # Number of samples to collect per million spans.
	I0817 21:30:24.365393  102247 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0817 21:30:24.365399  102247 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0817 21:30:24.365404  102247 command_runner.go:130] > [crio.stats]
	I0817 21:30:24.365410  102247 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0817 21:30:24.365418  102247 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0817 21:30:24.365427  102247 command_runner.go:130] > # stats_collection_period = 0
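
The header of the dump above notes that options marked "This option supports live configuration reload" are re-read when CRI-O receives SIGHUP. A minimal sketch of triggering that reload; resolving the daemon via pidof is an assumption made here for brevity, and on a systemd host `systemctl reload crio` accomplishes the same thing:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strconv"
	"strings"
	"syscall"
)

func main() {
	out, err := exec.Command("pidof", "crio").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "crio not running:", err)
		os.Exit(1)
	}
	pid, err := strconv.Atoi(strings.Fields(string(out))[0])
	if err != nil {
		fmt.Fprintln(os.Stderr, "unexpected pidof output:", err)
		os.Exit(1)
	}
	// Options marked "supports live configuration reload" are re-read on
	// SIGHUP; everything else still requires a full daemon restart.
	if err := syscall.Kill(pid, syscall.SIGHUP); err != nil {
		fmt.Fprintln(os.Stderr, "signal failed:", err)
		os.Exit(1)
	}
	fmt.Println("sent SIGHUP to crio, pid", pid)
}
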
	I0817 21:30:24.365500  102247 cni.go:84] Creating CNI manager for ""
	I0817 21:30:24.365509  102247 cni.go:136] 2 nodes found, recommending kindnet
	I0817 21:30:24.365517  102247 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0817 21:30:24.365537  102247 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-938028 NodeName:multinode-938028-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0817 21:30:24.365660  102247 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-938028-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
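
kubeadm.go:181 logs the config it just rendered: minikube fills a Go template from the kubeadm options struct logged above and concatenates the four API objects (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) into one YAML stream. A stripped-down sketch of the same idea with text/template; the struct and template here are illustrative assumptions, not minikube's real template:

package main

import (
	"os"
	"text/template"
)

// Only the InitConfiguration fragment is modeled; the real template
// renders all four kubeadm API objects shown in the log.
const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

type params struct {
	AdvertiseAddress string
	APIServerPort    int
	CRISocket        string
	NodeName         string
	NodeIP           string
}

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, params{
		AdvertiseAddress: "192.168.58.3",
		APIServerPort:    8443,
		CRISocket:        "/var/run/crio/crio.sock",
		NodeName:         "multinode-938028-m02",
		NodeIP:           "192.168.58.3",
	}); err != nil {
		panic(err)
	}
}
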
	
	I0817 21:30:24.365708  102247 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-938028-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:multinode-938028 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0817 21:30:24.365771  102247 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0817 21:30:24.372762  102247 command_runner.go:130] > kubeadm
	I0817 21:30:24.372899  102247 command_runner.go:130] > kubectl
	I0817 21:30:24.373057  102247 command_runner.go:130] > kubelet
	I0817 21:30:24.374492  102247 binaries.go:44] Found k8s binaries, skipping transfer
	I0817 21:30:24.374554  102247 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0817 21:30:24.381986  102247 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0817 21:30:24.397020  102247 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0817 21:30:24.411804  102247 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0817 21:30:24.414712  102247 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
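
The bash one-liner above is a filter-and-append: drop any /etc/hosts line already tagged with the control-plane alias, re-add the current mapping, and copy the result back through a temp file. A minimal Go version of the same edit; the temp-file-plus-`sudo cp` dance from the log is collapsed into a direct rewrite, which is a simplification:

package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "control-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Same filter as grep -v $'\tcontrol-plane.minikube.internal$'.
		if !strings.HasSuffix(line, "\t"+entry) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, "192.168.58.2\t"+entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}
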
	I0817 21:30:24.424150  102247 host.go:66] Checking if "multinode-938028" exists ...
	I0817 21:30:24.424355  102247 config.go:182] Loaded profile config "multinode-938028": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 21:30:24.424417  102247 start.go:301] JoinCluster: &{Name:multinode-938028 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-938028 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:30:24.424517  102247 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0817 21:30:24.424564  102247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-938028
	I0817 21:30:24.440583  102247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/multinode-938028/id_rsa Username:docker}
	I0817 21:30:24.581852  102247 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token pk34cw.z27puo536cwwl54b --discovery-token-ca-cert-hash sha256:6990f7150c46d703a60b6aaa6f152cf1f359295cabe399f949b0e443e5fdc599 
	I0817 21:30:24.581931  102247 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0817 21:30:24.581984  102247 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token pk34cw.z27puo536cwwl54b --discovery-token-ca-cert-hash sha256:6990f7150c46d703a60b6aaa6f152cf1f359295cabe399f949b0e443e5fdc599 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-938028-m02"
	I0817 21:30:24.614451  102247 command_runner.go:130] > [preflight] Running pre-flight checks
	I0817 21:30:24.641755  102247 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0817 21:30:24.641786  102247 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1039-gcp
	I0817 21:30:24.641795  102247 command_runner.go:130] > OS: Linux
	I0817 21:30:24.641803  102247 command_runner.go:130] > CGROUPS_CPU: enabled
	I0817 21:30:24.641811  102247 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0817 21:30:24.641816  102247 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0817 21:30:24.641825  102247 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0817 21:30:24.641830  102247 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0817 21:30:24.641835  102247 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0817 21:30:24.641842  102247 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0817 21:30:24.641848  102247 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0817 21:30:24.641854  102247 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0817 21:30:24.716817  102247 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0817 21:30:24.716851  102247 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0817 21:30:24.741591  102247 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0817 21:30:24.741644  102247 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0817 21:30:24.741653  102247 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0817 21:30:24.812083  102247 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0817 21:30:26.825253  102247 command_runner.go:130] > This node has joined the cluster:
	I0817 21:30:26.825275  102247 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0817 21:30:26.825291  102247 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0817 21:30:26.825298  102247 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0817 21:30:26.827680  102247 command_runner.go:130] ! W0817 21:30:24.614027    1105 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0817 21:30:26.827705  102247 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1039-gcp\n", err: exit status 1
	I0817 21:30:26.827716  102247 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0817 21:30:26.827733  102247 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token pk34cw.z27puo536cwwl54b --discovery-token-ca-cert-hash sha256:6990f7150c46d703a60b6aaa6f152cf1f359295cabe399f949b0e443e5fdc599 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-938028-m02": (2.245734952s)
	I0817 21:30:26.827751  102247 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0817 21:30:26.916545  102247 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0817 21:30:26.990116  102247 start.go:303] JoinCluster complete in 2.565694158s
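JoinCluster is a two-step handshake, visible between 21:30:24.424517 and this completion line: mint a join command on the control plane with kubeadm token create --print-join-command --ttl=0, then replay it on the new node with --ignore-preflight-errors=all, an explicit --cri-socket, and a --node-name override. A hedged sketch of that flow with os/exec (run locally for illustration; minikube sends both commands through its SSH runner with sudo):

    package main

    import (
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Step 1: on the control plane, mint a join command (token + CA cert
        // hash), as `kubeadm token create --print-join-command --ttl=0` above.
        out, err := exec.Command("kubeadm", "token", "create",
            "--print-join-command", "--ttl=0").Output()
        if err != nil {
            panic(err)
        }
        join := strings.Fields(strings.TrimSpace(string(out)))

        // Step 2: replay it on the worker with the extra flags from the log.
        args := append(join[1:], // drop the leading "kubeadm"
            "--ignore-preflight-errors=all",
            "--cri-socket", "/var/run/crio/crio.sock",
            "--node-name=multinode-938028-m02")
        cmd := exec.Command("kubeadm", args...)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }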
	I0817 21:30:26.990146  102247 cni.go:84] Creating CNI manager for ""
	I0817 21:30:26.990152  102247 cni.go:136] 2 nodes found, recommending kindnet
	I0817 21:30:26.990212  102247 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0817 21:30:26.993457  102247 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0817 21:30:26.993487  102247 command_runner.go:130] >   Size: 3955775   	Blocks: 7728       IO Block: 4096   regular file
	I0817 21:30:26.993496  102247 command_runner.go:130] > Device: 33h/51d	Inode: 838607      Links: 1
	I0817 21:30:26.993503  102247 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0817 21:30:26.993509  102247 command_runner.go:130] > Access: 2023-05-09 19:53:47.000000000 +0000
	I0817 21:30:26.993514  102247 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I0817 21:30:26.993518  102247 command_runner.go:130] > Change: 2023-08-17 21:10:53.952483634 +0000
	I0817 21:30:26.993523  102247 command_runner.go:130] >  Birth: 2023-08-17 21:10:53.932481714 +0000
	I0817 21:30:26.993611  102247 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.27.4/kubectl ...
	I0817 21:30:26.993622  102247 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0817 21:30:27.008757  102247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0817 21:30:27.251417  102247 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0817 21:30:27.254690  102247 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0817 21:30:27.256818  102247 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0817 21:30:27.266902  102247 command_runner.go:130] > daemonset.apps/kindnet configured
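With two nodes found, minikube picks kindnet, scps the manifest to /var/tmp/minikube/cni.yaml, and applies it with the node's pinned kubectl against the on-host kubeconfig; the "unchanged"/"configured" lines above are ordinary kubectl apply output. A minimal sketch of that invocation (paths taken from the log; an illustration, not minikube's code):

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        // Same shape as the log's invocation: the pinned kubectl binary,
        // the on-host kubeconfig, and the staged manifest.
        cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.27.4/kubectl",
            "apply",
            "--kubeconfig=/var/lib/minikube/kubeconfig",
            "-f", "/var/tmp/minikube/cni.yaml")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }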
	I0817 21:30:27.271058  102247 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16865-10716/kubeconfig
	I0817 21:30:27.271260  102247 kapi.go:59] client config for multinode-938028: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16865-10716/.minikube/profiles/multinode-938028/client.crt", KeyFile:"/home/jenkins/minikube-integration/16865-10716/.minikube/profiles/multinode-938028/client.key", CAFile:"/home/jenkins/minikube-integration/16865-10716/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d28680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
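The rest.Config dump above shows how the test client authenticates: client cert/key plus the cluster CA under TLSClientConfig, and QPS/Burst left at 0, which means client-go's defaults apply. Loading an equivalent typed client from that kubeconfig looks roughly like this (path copied from the log; sketch only):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("",
            "/home/jenkins/minikube-integration/16865-10716/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("nodes:", len(nodes.Items))
    }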
	I0817 21:30:27.271531  102247 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0817 21:30:27.271542  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:27.271549  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:27.271555  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:27.273325  102247 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0817 21:30:27.273341  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:27.273348  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:27.273353  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:27.273359  102247 round_trippers.go:580]     Content-Length: 291
	I0817 21:30:27.273367  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:27 GMT
	I0817 21:30:27.273373  102247 round_trippers.go:580]     Audit-Id: d3111166-f1c6-4e0c-b32b-176dc94138c0
	I0817 21:30:27.273380  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:27.273387  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:27.273411  102247 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7aa8886c-0fdb-4400-87f8-3d24dd96a241","resourceVersion":"439","creationTimestamp":"2023-08-17T21:29:53Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0817 21:30:27.273494  102247 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-938028" context rescaled to 1 replicas
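The GET on the deployments/coredns/scale endpoint and the "rescaled to 1 replicas" line use the Scale subresource rather than patching the Deployment itself. With client-go the same read-modify-write is a GetScale/UpdateScale pair (a sketch, assuming a reachable kubeconfig at the default location):

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Read the current scale, then write it back only if it differs.
        dep := cs.AppsV1().Deployments("kube-system")
        scale, err := dep.GetScale(context.TODO(), "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        if scale.Spec.Replicas != 1 {
            scale.Spec.Replicas = 1
            if _, err := dep.UpdateScale(context.TODO(), "coredns", scale,
                metav1.UpdateOptions{}); err != nil {
                panic(err)
            }
        }
    }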
	I0817 21:30:27.273519  102247 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0817 21:30:27.275928  102247 out.go:177] * Verifying Kubernetes components...
	I0817 21:30:27.277542  102247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
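systemctl is-active --quiet reports state purely through its exit code (0 means active), which is why this probe produces no output in the log. A tiny sketch of the same check (assumes a local systemd; minikube runs it inside the node container over SSH):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Exit code 0 from `systemctl is-active --quiet` means the unit is
        // active; a non-zero exit surfaces here as a non-nil error.
        err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
        fmt.Println("kubelet active:", err == nil)
    }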
	I0817 21:30:27.287783  102247 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16865-10716/kubeconfig
	I0817 21:30:27.287987  102247 kapi.go:59] client config for multinode-938028: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16865-10716/.minikube/profiles/multinode-938028/client.crt", KeyFile:"/home/jenkins/minikube-integration/16865-10716/.minikube/profiles/multinode-938028/client.key", CAFile:"/home/jenkins/minikube-integration/16865-10716/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d28680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0817 21:30:27.288201  102247 node_ready.go:35] waiting up to 6m0s for node "multinode-938028-m02" to be "Ready" ...
	I0817 21:30:27.288252  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-938028-m02
	I0817 21:30:27.288259  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:27.288267  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:27.288276  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:27.290340  102247 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:30:27.290362  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:27.290373  102247 round_trippers.go:580]     Audit-Id: a9c5e46c-f155-4b31-9f4f-7ac1b3493a9e
	I0817 21:30:27.290382  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:27.290395  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:27.290407  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:27.290418  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:27.290425  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:27 GMT
	I0817 21:30:27.290556  102247 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-938028-m02","uid":"b70d84a5-55aa-4b48-bb5f-105ae56d0b2b","resourceVersion":"491","creationTimestamp":"2023-08-17T21:30:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-938028-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:30:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:30:2
6Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5210 chars]
	I0817 21:30:27.290876  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-938028-m02
	I0817 21:30:27.290889  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:27.290896  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:27.290902  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:27.292616  102247 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0817 21:30:27.292636  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:27.292646  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:27.292655  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:27.292667  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:27.292680  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:27 GMT
	I0817 21:30:27.292692  102247 round_trippers.go:580]     Audit-Id: c875bd89-2bca-4ad5-8b8b-772a7ea56e0b
	I0817 21:30:27.292704  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:27.292842  102247 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-938028-m02","uid":"b70d84a5-55aa-4b48-bb5f-105ae56d0b2b","resourceVersion":"491","creationTimestamp":"2023-08-17T21:30:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-938028-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:30:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:30:2
6Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5210 chars]
	I0817 21:30:27.793816  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-938028-m02
	I0817 21:30:27.793836  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:27.793844  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:27.793851  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:27.796389  102247 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:30:27.796409  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:27.796417  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:27 GMT
	I0817 21:30:27.796423  102247 round_trippers.go:580]     Audit-Id: 62cf2ae8-a7e4-45dc-a97d-6e1e278cd099
	I0817 21:30:27.796428  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:27.796434  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:27.796439  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:27.796445  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:27.796549  102247 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-938028-m02","uid":"b70d84a5-55aa-4b48-bb5f-105ae56d0b2b","resourceVersion":"491","creationTimestamp":"2023-08-17T21:30:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-938028-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:30:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:30:2
6Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5210 chars]
	I0817 21:30:28.294043  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-938028-m02
	I0817 21:30:28.294063  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:28.294072  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:28.294078  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:28.296116  102247 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:30:28.296138  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:28.296149  102247 round_trippers.go:580]     Audit-Id: 5212d0e8-60a8-4cd9-9480-0cd389419f5f
	I0817 21:30:28.296158  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:28.296167  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:28.296178  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:28.296186  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:28.296194  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:28 GMT
	I0817 21:30:28.296306  102247 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-938028-m02","uid":"b70d84a5-55aa-4b48-bb5f-105ae56d0b2b","resourceVersion":"503","creationTimestamp":"2023-08-17T21:30:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-938028-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:30:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:30:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5296 chars]
	I0817 21:30:28.296584  102247 node_ready.go:49] node "multinode-938028-m02" has status "Ready":"True"
	I0817 21:30:28.296597  102247 node_ready.go:38] duration metric: took 1.00838369s waiting for node "multinode-938028-m02" to be "Ready" ...
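node_ready is a poll loop: re-GET /api/v1/nodes/<name> on a short interval until the NodeReady condition reports True, which here happened after about a second (resourceVersion 491 -> 503). A sketch of the same wait with client-go and wait.PollImmediate (names and timeout mirror the log; not minikube's actual code):

    package main

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the node's NodeReady condition is True.
    func nodeReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute,
            func() (bool, error) {
                n, err := cs.CoreV1().Nodes().Get(context.TODO(),
                    "multinode-938028-m02", metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat errors as transient; keep polling
                }
                return nodeReady(n), nil
            })
        if err != nil {
            panic(err)
        }
    }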
	I0817 21:30:28.296604  102247 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 21:30:28.296652  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0817 21:30:28.296660  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:28.296667  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:28.296673  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:28.299909  102247 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:30:28.299933  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:28.299943  102247 round_trippers.go:580]     Audit-Id: 86b48b0e-fb55-4727-bef7-5f19e4a7e3ea
	I0817 21:30:28.299952  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:28.299960  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:28.299969  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:28.299982  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:28.299995  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:28 GMT
	I0817 21:30:28.300452  102247 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"507"},"items":[{"metadata":{"name":"coredns-5d78c9869d-klmz7","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"9cb10fd3-6480-47a0-8698-0573bb8dbfd1","resourceVersion":"435","creationTimestamp":"2023-08-17T21:30:06Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"2c31a337-e481-47f2-9524-9a6e8cf199fb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:30:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c31a337-e481-47f2-9524-9a6e8cf199fb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68974 chars]
	I0817 21:30:28.302518  102247 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-klmz7" in "kube-system" namespace to be "Ready" ...
	I0817 21:30:28.302579  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-klmz7
	I0817 21:30:28.302588  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:28.302595  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:28.302602  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:28.304364  102247 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0817 21:30:28.304384  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:28.304394  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:28.304402  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:28.304410  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:28.304420  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:28.304430  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:28 GMT
	I0817 21:30:28.304440  102247 round_trippers.go:580]     Audit-Id: 7b93b2b3-c094-4638-a891-acd5d1ddc667
	I0817 21:30:28.304563  102247 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-klmz7","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"9cb10fd3-6480-47a0-8698-0573bb8dbfd1","resourceVersion":"435","creationTimestamp":"2023-08-17T21:30:06Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"2c31a337-e481-47f2-9524-9a6e8cf199fb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:30:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c31a337-e481-47f2-9524-9a6e8cf199fb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0817 21:30:28.304965  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-938028
	I0817 21:30:28.304978  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:28.304988  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:28.304997  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:28.306753  102247 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0817 21:30:28.306768  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:28.306774  102247 round_trippers.go:580]     Audit-Id: da7650b7-b211-48ce-9467-93b0824046d1
	I0817 21:30:28.306779  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:28.306784  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:28.306789  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:28.306796  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:28.306801  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:28 GMT
	I0817 21:30:28.306910  102247 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","resourceVersion":"425","creationTimestamp":"2023-08-17T21:29:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-938028","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-938028","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_29_54_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:29:50Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0817 21:30:28.307167  102247 pod_ready.go:92] pod "coredns-5d78c9869d-klmz7" in "kube-system" namespace has status "Ready":"True"
	I0817 21:30:28.307177  102247 pod_ready.go:81] duration metric: took 4.642302ms waiting for pod "coredns-5d78c9869d-klmz7" in "kube-system" namespace to be "Ready" ...
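The pod_ready checks that follow apply the same pattern to pods: fetch each system-critical pod and treat a PodReady condition of True as done. A compact sketch that lists kube-system pods and reports that condition (assumes a reachable kubeconfig):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's PodReady condition is True.
    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            fmt.Printf("%-55s ready=%v\n", p.Name, podReady(&p))
        }
    }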
	I0817 21:30:28.307184  102247 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-938028" in "kube-system" namespace to be "Ready" ...
	I0817 21:30:28.307221  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-938028
	I0817 21:30:28.307229  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:28.307236  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:28.307242  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:28.309182  102247 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0817 21:30:28.309199  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:28.309208  102247 round_trippers.go:580]     Audit-Id: 9bd34dc8-1e86-4d5f-a8f9-bc461b8e714f
	I0817 21:30:28.309216  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:28.309225  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:28.309235  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:28.309248  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:28.309259  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:28 GMT
	I0817 21:30:28.309360  102247 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-938028","namespace":"kube-system","uid":"8467d526-5134-4571-bd8b-37cba78ca9a6","resourceVersion":"452","creationTimestamp":"2023-08-17T21:29:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"a18bac1e6ded925776f8b635b518e616","kubernetes.io/config.mirror":"a18bac1e6ded925776f8b635b518e616","kubernetes.io/config.seen":"2023-08-17T21:29:47.593280067Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:29:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0817 21:30:28.309682  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-938028
	I0817 21:30:28.309694  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:28.309701  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:28.309707  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:28.311311  102247 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0817 21:30:28.311325  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:28.311332  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:28.311346  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:28.311355  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:28.311369  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:28.311382  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:28 GMT
	I0817 21:30:28.311394  102247 round_trippers.go:580]     Audit-Id: baa0fe1d-e9e9-4b18-8f07-919afb8f7c0b
	I0817 21:30:28.311541  102247 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","resourceVersion":"425","creationTimestamp":"2023-08-17T21:29:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-938028","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-938028","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_29_54_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:29:50Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0817 21:30:28.311821  102247 pod_ready.go:92] pod "etcd-multinode-938028" in "kube-system" namespace has status "Ready":"True"
	I0817 21:30:28.311834  102247 pod_ready.go:81] duration metric: took 4.643938ms waiting for pod "etcd-multinode-938028" in "kube-system" namespace to be "Ready" ...
	I0817 21:30:28.311846  102247 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-938028" in "kube-system" namespace to be "Ready" ...
	I0817 21:30:28.311886  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-938028
	I0817 21:30:28.311893  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:28.311900  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:28.311906  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:28.313523  102247 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0817 21:30:28.313541  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:28.313548  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:28.313564  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:28.313572  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:28.313581  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:28.313590  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:28 GMT
	I0817 21:30:28.313604  102247 round_trippers.go:580]     Audit-Id: 292b7d65-e6e3-46cc-9577-6151aceebc0a
	I0817 21:30:28.313688  102247 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-938028","namespace":"kube-system","uid":"6dac9864-4745-4595-9cc1-a8ce957c247c","resourceVersion":"453","creationTimestamp":"2023-08-17T21:29:54Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"d26f6070344b4e89652ceba8dd748820","kubernetes.io/config.mirror":"d26f6070344b4e89652ceba8dd748820","kubernetes.io/config.seen":"2023-08-17T21:29:53.969209809Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:29:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0817 21:30:28.314089  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-938028
	I0817 21:30:28.314102  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:28.314109  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:28.314115  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:28.315649  102247 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0817 21:30:28.315663  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:28.315670  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:28.315675  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:28 GMT
	I0817 21:30:28.315681  102247 round_trippers.go:580]     Audit-Id: d536e276-4b58-4548-94d3-1be07c7d4951
	I0817 21:30:28.315686  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:28.315692  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:28.315697  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:28.315799  102247 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","resourceVersion":"425","creationTimestamp":"2023-08-17T21:29:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-938028","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-938028","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_29_54_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:29:50Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0817 21:30:28.316055  102247 pod_ready.go:92] pod "kube-apiserver-multinode-938028" in "kube-system" namespace has status "Ready":"True"
	I0817 21:30:28.316067  102247 pod_ready.go:81] duration metric: took 4.211536ms waiting for pod "kube-apiserver-multinode-938028" in "kube-system" namespace to be "Ready" ...
	I0817 21:30:28.316074  102247 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-938028" in "kube-system" namespace to be "Ready" ...
	I0817 21:30:28.316112  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-938028
	I0817 21:30:28.316119  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:28.316125  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:28.316131  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:28.319405  102247 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0817 21:30:28.319426  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:28.319436  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:28.319445  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:28.319453  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:28 GMT
	I0817 21:30:28.319467  102247 round_trippers.go:580]     Audit-Id: f3137c56-431c-4fe2-8f7d-f951132087cb
	I0817 21:30:28.319479  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:28.319492  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:28.319623  102247 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-938028","namespace":"kube-system","uid":"4089bb0e-1099-40d1-9df4-68943ea6fb68","resourceVersion":"455","creationTimestamp":"2023-08-17T21:29:54Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b5f752a96b068e4da65f4bf187b99598","kubernetes.io/config.mirror":"b5f752a96b068e4da65f4bf187b99598","kubernetes.io/config.seen":"2023-08-17T21:29:53.969211526Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:29:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0817 21:30:28.320006  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-938028
	I0817 21:30:28.320019  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:28.320029  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:28.320041  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:28.321518  102247 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0817 21:30:28.321537  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:28.321547  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:28.321556  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:28.321564  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:28.321577  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:28 GMT
	I0817 21:30:28.321589  102247 round_trippers.go:580]     Audit-Id: c84a6d9d-b92d-445e-a5f9-16cf3e6edd03
	I0817 21:30:28.321602  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:28.321704  102247 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","resourceVersion":"425","creationTimestamp":"2023-08-17T21:29:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-938028","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-938028","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_29_54_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:29:50Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0817 21:30:28.322022  102247 pod_ready.go:92] pod "kube-controller-manager-multinode-938028" in "kube-system" namespace has status "Ready":"True"
	I0817 21:30:28.322038  102247 pod_ready.go:81] duration metric: took 5.957278ms waiting for pod "kube-controller-manager-multinode-938028" in "kube-system" namespace to be "Ready" ...
	I0817 21:30:28.322051  102247 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9hrr6" in "kube-system" namespace to be "Ready" ...
	I0817 21:30:28.494448  102247 request.go:628] Waited for 172.321036ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9hrr6
	I0817 21:30:28.494520  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9hrr6
	I0817 21:30:28.494531  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:28.494543  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:28.494556  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:28.496533  102247 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0817 21:30:28.496559  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:28.496570  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:28.496580  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:28.496590  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:28.496602  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:28 GMT
	I0817 21:30:28.496610  102247 round_trippers.go:580]     Audit-Id: 4a252cf5-2c35-4829-a88b-38f040fb5c5f
	I0817 21:30:28.496622  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:28.496765  102247 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9hrr6","generateName":"kube-proxy-","namespace":"kube-system","uid":"8dbedd3f-c5c4-4403-8163-4d208d1239b4","resourceVersion":"506","creationTimestamp":"2023-08-17T21:30:26Z","labels":{"controller-revision-hash":"86cc8bcbf7","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"831ddc65-acb9-4009-a551-276dd84b70e8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:30:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"831ddc65-acb9-4009-a551-276dd84b70e8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
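The "Waited for ... due to client-side throttling, not priority and fairness" messages are emitted by client-go's own rate limiter: with QPS and Burst at 0 in the rest.Config above, the defaults of 5 QPS and burst 10 apply, so this burst of back-to-back GETs gets spaced out by roughly 200ms each, exactly the ~172-197ms waits logged here. Raising the limits on the config before building the clientset removes these waits (values below are illustrative):

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cfg.QPS = 50    // client-go default is 5 when left at 0
        cfg.Burst = 100 // client-go default is 10 when left at 0
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        _ = cs // requests made through cs are now rate-limited at 50 QPS client-side
    }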
	I0817 21:30:28.694623  102247 request.go:628] Waited for 197.349998ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-938028-m02
	I0817 21:30:28.694677  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-938028-m02
	I0817 21:30:28.694681  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:28.694689  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:28.694695  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:28.697256  102247 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:30:28.697280  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:28.697290  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:28.697299  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:28.697307  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:28.697314  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:28.697322  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:28 GMT
	I0817 21:30:28.697331  102247 round_trippers.go:580]     Audit-Id: 58cb0649-cb7b-49bd-a28a-8fe2df9987e5
	I0817 21:30:28.697451  102247 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-938028-m02","uid":"b70d84a5-55aa-4b48-bb5f-105ae56d0b2b","resourceVersion":"503","creationTimestamp":"2023-08-17T21:30:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-938028-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:30:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:30:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5296 chars]
	I0817 21:30:28.697855  102247 pod_ready.go:92] pod "kube-proxy-9hrr6" in "kube-system" namespace has status "Ready":"True"
	I0817 21:30:28.697875  102247 pod_ready.go:81] duration metric: took 375.811102ms waiting for pod "kube-proxy-9hrr6" in "kube-system" namespace to be "Ready" ...
	I0817 21:30:28.697887  102247 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bf5b5" in "kube-system" namespace to be "Ready" ...
	I0817 21:30:28.894245  102247 request.go:628] Waited for 196.266847ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bf5b5
	I0817 21:30:28.894305  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bf5b5
	I0817 21:30:28.894310  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:28.894318  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:28.894324  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:28.896571  102247 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:30:28.896597  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:28.896607  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:28.896617  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:28.896626  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:28.896635  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:28.896644  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:28 GMT
	I0817 21:30:28.896654  102247 round_trippers.go:580]     Audit-Id: 532e48cd-1b29-4a2c-8bb8-7daf39c158c7
	I0817 21:30:28.896822  102247 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bf5b5","generateName":"kube-proxy-","namespace":"kube-system","uid":"39b3791d-3973-4cb6-ac55-eecde2f2fd0f","resourceVersion":"419","creationTimestamp":"2023-08-17T21:30:06Z","labels":{"controller-revision-hash":"86cc8bcbf7","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"831ddc65-acb9-4009-a551-276dd84b70e8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:30:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"831ddc65-acb9-4009-a551-276dd84b70e8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0817 21:30:29.094627  102247 request.go:628] Waited for 197.370686ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-938028
	I0817 21:30:29.094677  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-938028
	I0817 21:30:29.094682  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:29.094689  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:29.094695  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:29.097176  102247 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:30:29.097201  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:29.097212  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:29.097221  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:29.097233  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:29.097242  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:29.097251  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:29 GMT
	I0817 21:30:29.097256  102247 round_trippers.go:580]     Audit-Id: 5bd004e5-932d-4e91-8705-1543e76f3603
	I0817 21:30:29.097380  102247 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","resourceVersion":"425","creationTimestamp":"2023-08-17T21:29:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-938028","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-938028","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_29_54_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-17T21:29:50Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0817 21:30:29.097728  102247 pod_ready.go:92] pod "kube-proxy-bf5b5" in "kube-system" namespace has status "Ready":"True"
	I0817 21:30:29.097743  102247 pod_ready.go:81] duration metric: took 399.82322ms waiting for pod "kube-proxy-bf5b5" in "kube-system" namespace to be "Ready" ...
	I0817 21:30:29.097752  102247 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-938028" in "kube-system" namespace to be "Ready" ...
	I0817 21:30:29.295060  102247 request.go:628] Waited for 197.250622ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-938028
	I0817 21:30:29.295122  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-938028
	I0817 21:30:29.295127  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:29.295134  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:29.295141  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:29.297355  102247 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:30:29.297371  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:29.297378  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:29.297384  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:29.297389  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:29.297395  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:29.297400  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:29 GMT
	I0817 21:30:29.297407  102247 round_trippers.go:580]     Audit-Id: 26eed479-9da1-4870-922b-516a7018f0f3
	I0817 21:30:29.297581  102247 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-938028","namespace":"kube-system","uid":"ec4e68df-918c-4e2c-b757-5117e84954d2","resourceVersion":"454","creationTimestamp":"2023-08-17T21:29:54Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0f42571bbc769848748479d22483ba61","kubernetes.io/config.mirror":"0f42571bbc769848748479d22483ba61","kubernetes.io/config.seen":"2023-08-17T21:29:53.969212656Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:29:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0817 21:30:29.494407  102247 request.go:628] Waited for 196.354229ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-938028
	I0817 21:30:29.494470  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-938028
	I0817 21:30:29.494475  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:29.494483  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:29.494491  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:29.496829  102247 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:30:29.496850  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:29.496859  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:29.496866  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:29.496873  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:29.496881  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:29.496889  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:29 GMT
	I0817 21:30:29.496899  102247 round_trippers.go:580]     Audit-Id: 11994ae8-c192-4bcf-9d8a-4fe42852a396
	I0817 21:30:29.497053  102247 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","resourceVersion":"425","creationTimestamp":"2023-08-17T21:29:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-938028","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-938028","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_29_54_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-17T21:29:50Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0817 21:30:29.497366  102247 pod_ready.go:92] pod "kube-scheduler-multinode-938028" in "kube-system" namespace has status "Ready":"True"
	I0817 21:30:29.497381  102247 pod_ready.go:81] duration metric: took 399.622176ms waiting for pod "kube-scheduler-multinode-938028" in "kube-system" namespace to be "Ready" ...
	I0817 21:30:29.497395  102247 pod_ready.go:38] duration metric: took 1.200779971s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0817 21:30:29.497413  102247 system_svc.go:44] waiting for kubelet service to be running ....
	I0817 21:30:29.497573  102247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 21:30:29.508167  102247 system_svc.go:56] duration metric: took 10.746282ms WaitForService to wait for kubelet.
	I0817 21:30:29.508190  102247 kubeadm.go:581] duration metric: took 2.234649066s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0817 21:30:29.508221  102247 node_conditions.go:102] verifying NodePressure condition ...
	I0817 21:30:29.694612  102247 request.go:628] Waited for 186.315121ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0817 21:30:29.694676  102247 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0817 21:30:29.694681  102247 round_trippers.go:469] Request Headers:
	I0817 21:30:29.694688  102247 round_trippers.go:473]     Accept: application/json, */*
	I0817 21:30:29.694700  102247 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0817 21:30:29.697215  102247 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0817 21:30:29.697236  102247 round_trippers.go:577] Response Headers:
	I0817 21:30:29.697244  102247 round_trippers.go:580]     Audit-Id: e0e4ee2c-8a74-41e8-a5f7-d5b588bf009a
	I0817 21:30:29.697250  102247 round_trippers.go:580]     Cache-Control: no-cache, private
	I0817 21:30:29.697256  102247 round_trippers.go:580]     Content-Type: application/json
	I0817 21:30:29.697265  102247 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86c5f0dc-1f90-4d86-b4fc-0a9017188716
	I0817 21:30:29.697273  102247 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2cc0e188-4cf5-40c4-80ec-2fb828b13277
	I0817 21:30:29.697281  102247 round_trippers.go:580]     Date: Thu, 17 Aug 2023 21:30:29 GMT
	I0817 21:30:29.697448  102247 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"508"},"items":[{"metadata":{"name":"multinode-938028","uid":"3792e48f-24a1-46e6-b3af-b885860d4a19","resourceVersion":"425","creationTimestamp":"2023-08-17T21:29:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-938028","kubernetes.io/os":"linux","minikube.k8s.io/commit":"887b29127f76723e975982e9ba9e8c24f3dd2612","minikube.k8s.io/name":"multinode-938028","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_17T21_29_54_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12288 chars]
	I0817 21:30:29.697933  102247 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0817 21:30:29.697948  102247 node_conditions.go:123] node cpu capacity is 8
	I0817 21:30:29.697956  102247 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0817 21:30:29.697961  102247 node_conditions.go:123] node cpu capacity is 8
	I0817 21:30:29.697965  102247 node_conditions.go:105] duration metric: took 189.738305ms to run NodePressure ...
	I0817 21:30:29.697977  102247 start.go:228] waiting for startup goroutines ...
	I0817 21:30:29.698012  102247 start.go:242] writing updated cluster config ...
	I0817 21:30:29.698281  102247 ssh_runner.go:195] Run: rm -f paused
	I0817 21:30:29.743755  102247 start.go:600] kubectl: 1.28.0, cluster: 1.27.4 (minor skew: 1)
	I0817 21:30:29.747079  102247 out.go:177] * Done! kubectl is now configured to use "multinode-938028" cluster and "default" namespace by default
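
The repeated "request.go:628] Waited ... due to client-side throttling" lines above come from client-go's default rate limiter (QPS 5, burst 10), and the pod_ready.go entries are a poll of each pod's Ready condition. Below is a minimal client-go sketch of both ideas; it is not minikube's own code, and the kubeconfig path and pod name are only illustrative.

// poll_ready.go - illustrative sketch, not minikube source.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; any config pointing at the cluster works.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	// The defaults (QPS 5, Burst 10) produce the "client-side throttling"
	// waits seen in the log; raising them removes the delay.
	cfg.QPS = 50
	cfg.Burst = 100

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll the pod until its Ready condition is True, as pod_ready.go does.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for ctx.Err() == nil {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx,
			"kube-scheduler-multinode-938028", metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	panic("timed out waiting for pod to become Ready")
}

Raising QPS and Burst on rest.Config is the usual way to avoid the artificial ~200ms waits visible in the timestamps above.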
	
	* 
	* ==> CRI-O <==
	* Aug 17 21:30:10 multinode-938028 crio[958]: time="2023-08-17 21:30:10.285963478Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/6876125688ea2ab7977e2daa2e28e3b6127af4e48eb17f4adffa6826436cce09/merged/etc/passwd: no such file or directory"
	Aug 17 21:30:10 multinode-938028 crio[958]: time="2023-08-17 21:30:10.286010018Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/6876125688ea2ab7977e2daa2e28e3b6127af4e48eb17f4adffa6826436cce09/merged/etc/group: no such file or directory"
	Aug 17 21:30:10 multinode-938028 crio[958]: time="2023-08-17 21:30:10.324302528Z" level=info msg="Created container 01c546db6d91150fa2ec8f6e3131082dab995c3b78d4080a02d8993b52c22625: kube-system/storage-provisioner/storage-provisioner" id=3004c2d0-9706-4d02-9227-f0fed335a5fb name=/runtime.v1.RuntimeService/CreateContainer
	Aug 17 21:30:10 multinode-938028 crio[958]: time="2023-08-17 21:30:10.324923139Z" level=info msg="Starting container: 01c546db6d91150fa2ec8f6e3131082dab995c3b78d4080a02d8993b52c22625" id=9a11b285-ce94-4b5b-94da-21720416146d name=/runtime.v1.RuntimeService/StartContainer
	Aug 17 21:30:10 multinode-938028 crio[958]: time="2023-08-17 21:30:10.333281698Z" level=info msg="Started container" PID=2378 containerID=01c546db6d91150fa2ec8f6e3131082dab995c3b78d4080a02d8993b52c22625 description=kube-system/storage-provisioner/storage-provisioner id=9a11b285-ce94-4b5b-94da-21720416146d name=/runtime.v1.RuntimeService/StartContainer sandboxID=04481b0a60dab9810106056e198a8fce686d94f954e67626fb7d574b0ae7391f
	Aug 17 21:30:30 multinode-938028 crio[958]: time="2023-08-17 21:30:30.749882821Z" level=info msg="Running pod sandbox: default/busybox-67b7f59bb-b9qpl/POD" id=42325992-7ae0-4111-b7fc-30154b53b494 name=/runtime.v1.RuntimeService/RunPodSandbox
	Aug 17 21:30:30 multinode-938028 crio[958]: time="2023-08-17 21:30:30.749976720Z" level=warning msg="Allowed annotations are specified for workload []"
	Aug 17 21:30:30 multinode-938028 crio[958]: time="2023-08-17 21:30:30.763473910Z" level=info msg="Got pod network &{Name:busybox-67b7f59bb-b9qpl Namespace:default ID:9870676a3721be3b1d5dc106cf99c32766efa02f5ae5ac89ed364c48a945e47a UID:0db905a2-6d6c-413b-ac9c-ed80d6ac087d NetNS:/var/run/netns/ee71d65b-ac8a-4b5d-ab2e-f70478306c4d Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 17 21:30:30 multinode-938028 crio[958]: time="2023-08-17 21:30:30.763513180Z" level=info msg="Adding pod default_busybox-67b7f59bb-b9qpl to CNI network \"kindnet\" (type=ptp)"
	Aug 17 21:30:30 multinode-938028 crio[958]: time="2023-08-17 21:30:30.772796395Z" level=info msg="Got pod network &{Name:busybox-67b7f59bb-b9qpl Namespace:default ID:9870676a3721be3b1d5dc106cf99c32766efa02f5ae5ac89ed364c48a945e47a UID:0db905a2-6d6c-413b-ac9c-ed80d6ac087d NetNS:/var/run/netns/ee71d65b-ac8a-4b5d-ab2e-f70478306c4d Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 17 21:30:30 multinode-938028 crio[958]: time="2023-08-17 21:30:30.772954312Z" level=info msg="Checking pod default_busybox-67b7f59bb-b9qpl for CNI network kindnet (type=ptp)"
	Aug 17 21:30:30 multinode-938028 crio[958]: time="2023-08-17 21:30:30.809072147Z" level=info msg="Ran pod sandbox 9870676a3721be3b1d5dc106cf99c32766efa02f5ae5ac89ed364c48a945e47a with infra container: default/busybox-67b7f59bb-b9qpl/POD" id=42325992-7ae0-4111-b7fc-30154b53b494 name=/runtime.v1.RuntimeService/RunPodSandbox
	Aug 17 21:30:30 multinode-938028 crio[958]: time="2023-08-17 21:30:30.810145946Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=b402a989-b9b1-432c-9563-1c1a5dfcec48 name=/runtime.v1.ImageService/ImageStatus
	Aug 17 21:30:30 multinode-938028 crio[958]: time="2023-08-17 21:30:30.810357720Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=b402a989-b9b1-432c-9563-1c1a5dfcec48 name=/runtime.v1.ImageService/ImageStatus
	Aug 17 21:30:30 multinode-938028 crio[958]: time="2023-08-17 21:30:30.811138884Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=1c5bb16c-deac-43a1-ab48-2b9759ab7b5a name=/runtime.v1.ImageService/PullImage
	Aug 17 21:30:30 multinode-938028 crio[958]: time="2023-08-17 21:30:30.814748750Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Aug 17 21:30:30 multinode-938028 crio[958]: time="2023-08-17 21:30:30.967342885Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Aug 17 21:30:31 multinode-938028 crio[958]: time="2023-08-17 21:30:31.399622507Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335" id=1c5bb16c-deac-43a1-ab48-2b9759ab7b5a name=/runtime.v1.ImageService/PullImage
	Aug 17 21:30:31 multinode-938028 crio[958]: time="2023-08-17 21:30:31.400492059Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=a1218e7e-d241-40e9-8e96-0f6a79285c77 name=/runtime.v1.ImageService/ImageStatus
	Aug 17 21:30:31 multinode-938028 crio[958]: time="2023-08-17 21:30:31.401181919Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=a1218e7e-d241-40e9-8e96-0f6a79285c77 name=/runtime.v1.ImageService/ImageStatus
	Aug 17 21:30:31 multinode-938028 crio[958]: time="2023-08-17 21:30:31.402013469Z" level=info msg="Creating container: default/busybox-67b7f59bb-b9qpl/busybox" id=292dcde4-66f4-46de-a422-acb6bfc0aa60 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 17 21:30:31 multinode-938028 crio[958]: time="2023-08-17 21:30:31.402127231Z" level=warning msg="Allowed annotations are specified for workload []"
	Aug 17 21:30:31 multinode-938028 crio[958]: time="2023-08-17 21:30:31.448679483Z" level=info msg="Created container b4bb08f62bd70ded2f2b3cc5648a14f0b962b0084d994321314e1b3b8506335d: default/busybox-67b7f59bb-b9qpl/busybox" id=292dcde4-66f4-46de-a422-acb6bfc0aa60 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 17 21:30:31 multinode-938028 crio[958]: time="2023-08-17 21:30:31.449248830Z" level=info msg="Starting container: b4bb08f62bd70ded2f2b3cc5648a14f0b962b0084d994321314e1b3b8506335d" id=024888c0-eebf-4f0e-8f82-552290bc040e name=/runtime.v1.RuntimeService/StartContainer
	Aug 17 21:30:31 multinode-938028 crio[958]: time="2023-08-17 21:30:31.458397917Z" level=info msg="Started container" PID=2507 containerID=b4bb08f62bd70ded2f2b3cc5648a14f0b962b0084d994321314e1b3b8506335d description=default/busybox-67b7f59bb-b9qpl/busybox id=024888c0-eebf-4f0e-8f82-552290bc040e name=/runtime.v1.RuntimeService/StartContainer sandboxID=9870676a3721be3b1d5dc106cf99c32766efa02f5ae5ac89ed364c48a945e47a
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b4bb08f62bd70       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 seconds ago       Running             busybox                   0                   9870676a3721b       busybox-67b7f59bb-b9qpl
	01c546db6d911       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      25 seconds ago      Running             storage-provisioner       0                   04481b0a60dab       storage-provisioner
	5851bc5dbf513       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      26 seconds ago      Running             coredns                   0                   7ca38b15f6db6       coredns-5d78c9869d-klmz7
	cd830225b55d9       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da                                      28 seconds ago      Running             kindnet-cni               0                   115fe3af3fcb5       kindnet-qm6gj
	50a4e347742c3       6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4                                      28 seconds ago      Running             kube-proxy                0                   cd8c76513f80b       kube-proxy-bf5b5
	70d909f4e3514       f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5                                      47 seconds ago      Running             kube-controller-manager   0                   250c887c5612f       kube-controller-manager-multinode-938028
	e9114519839bf       e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c                                      47 seconds ago      Running             kube-apiserver            0                   47485af9f694c       kube-apiserver-multinode-938028
	cf8ef19e50d8b       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681                                      47 seconds ago      Running             etcd                      0                   ee7f705ae3372       etcd-multinode-938028
	16abeca7f5f1d       98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16                                      47 seconds ago      Running             kube-scheduler            0                   c7de22ad86ddf       kube-scheduler-multinode-938028
	
	* 
	* ==> coredns [5851bc5dbf5130d1219e1e65547bedeb1f5583daf8d31ed012c03d7494f9d614] <==
	* [INFO] 10.244.0.3:59075 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000074611s
	[INFO] 10.244.1.2:38285 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109198s
	[INFO] 10.244.1.2:59087 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001296913s
	[INFO] 10.244.1.2:53248 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000070466s
	[INFO] 10.244.1.2:37164 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00007184s
	[INFO] 10.244.1.2:40597 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.000870127s
	[INFO] 10.244.1.2:50836 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000046045s
	[INFO] 10.244.1.2:57267 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000070109s
	[INFO] 10.244.1.2:39326 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000056464s
	[INFO] 10.244.0.3:43720 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000094386s
	[INFO] 10.244.0.3:44133 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000088746s
	[INFO] 10.244.0.3:44172 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000041828s
	[INFO] 10.244.0.3:51262 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000052784s
	[INFO] 10.244.1.2:44810 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108209s
	[INFO] 10.244.1.2:57279 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105327s
	[INFO] 10.244.1.2:45865 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000046922s
	[INFO] 10.244.1.2:55956 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000065087s
	[INFO] 10.244.0.3:55045 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000094189s
	[INFO] 10.244.0.3:35737 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000105753s
	[INFO] 10.244.0.3:49218 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000107781s
	[INFO] 10.244.0.3:33611 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00010137s
	[INFO] 10.244.1.2:36023 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110757s
	[INFO] 10.244.1.2:53258 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000084138s
	[INFO] 10.244.1.2:39358 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000050462s
	[INFO] 10.244.1.2:51179 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00008335s
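
Each CoreDNS line above is its query log: client ip:port, query id, "type class name proto size do bufsize", response code, flags, response size, and latency. As a cross-check, here is a small sketch that sends the same kind of lookup straight at the kube-dns ClusterIP (10.96.0.10 in this cluster); it assumes a vantage point that can reach the service network, such as inside a pod or via minikube ssh.

// dns_probe.go - illustrative sketch; only works from inside the cluster network.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		// Bypass the host's resolv.conf and ask CoreDNS directly.
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	ips, err := r.LookupIPAddr(context.Background(),
		"kubernetes.default.svc.cluster.local")
	if err != nil {
		panic(err)
	}
	for _, ip := range ips {
		fmt.Println(ip.String()) // expect the 10.96.0.1 service IP
	}
}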
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-938028
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-938028
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=887b29127f76723e975982e9ba9e8c24f3dd2612
	                    minikube.k8s.io/name=multinode-938028
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_17T21_29_54_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 17 Aug 2023 21:29:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-938028
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 17 Aug 2023 21:30:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 17 Aug 2023 21:30:08 +0000   Thu, 17 Aug 2023 21:29:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 17 Aug 2023 21:30:08 +0000   Thu, 17 Aug 2023 21:29:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 17 Aug 2023 21:30:08 +0000   Thu, 17 Aug 2023 21:29:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 17 Aug 2023 21:30:08 +0000   Thu, 17 Aug 2023 21:30:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-938028
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	System Info:
	  Machine ID:                 5f44904d0edb48f090dfe74388a6bb9f
	  System UUID:                fed40584-c6ef-4645-b491-d64ffe16740c
	  Boot ID:                    8d1de0dd-e970-4922-97d1-4b473b3fd1c5
	  Kernel Version:             5.15.0-1039-gcp
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.27.4
	  Kube-Proxy Version:         v1.27.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-b9qpl                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 coredns-5d78c9869d-klmz7                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29s
	  kube-system                 etcd-multinode-938028                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         42s
	  kube-system                 kindnet-qm6gj                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-multinode-938028             250m (3%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-multinode-938028    200m (2%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-proxy-bf5b5                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-multinode-938028             100m (1%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27s                kube-proxy       
	  Normal  Starting                 48s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  48s (x8 over 48s)  kubelet          Node multinode-938028 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    48s (x8 over 48s)  kubelet          Node multinode-938028 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     48s (x8 over 48s)  kubelet          Node multinode-938028 status is now: NodeHasSufficientPID
	  Normal  Starting                 42s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s                kubelet          Node multinode-938028 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s                kubelet          Node multinode-938028 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s                kubelet          Node multinode-938028 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s                node-controller  Node multinode-938028 event: Registered Node multinode-938028 in Controller
	  Normal  NodeReady                27s                kubelet          Node multinode-938028 status is now: NodeReady
	
	
	Name:               multinode-938028-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-938028-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 17 Aug 2023 21:30:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-938028-m02" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 17 Aug 2023 21:30:27 +0000   Thu, 17 Aug 2023 21:30:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 17 Aug 2023 21:30:27 +0000   Thu, 17 Aug 2023 21:30:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 17 Aug 2023 21:30:27 +0000   Thu, 17 Aug 2023 21:30:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 17 Aug 2023 21:30:27 +0000   Thu, 17 Aug 2023 21:30:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-938028-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	System Info:
	  Machine ID:                 f1cc227406f040e393946a957fab315e
	  System UUID:                bf49c1d4-f4d9-404d-a3f0-094428a391cd
	  Boot ID:                    8d1de0dd-e970-4922-97d1-4b473b3fd1c5
	  Kernel Version:             5.15.0-1039-gcp
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.27.4
	  Kube-Proxy Version:         v1.27.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-khspl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 kindnet-w8k8m              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      9s
	  kube-system                 kube-proxy-9hrr6           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age               From             Message
	  ----    ------                   ----              ----             -------
	  Normal  Starting                 7s                kube-proxy       
	  Normal  RegisteredNode           9s                node-controller  Node multinode-938028-m02 event: Registered Node multinode-938028-m02 in Controller
	  Normal  NodeHasSufficientMemory  9s (x5 over 10s)  kubelet          Node multinode-938028-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x5 over 10s)  kubelet          Node multinode-938028-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x5 over 10s)  kubelet          Node multinode-938028-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8s                kubelet          Node multinode-938028-m02 status is now: NodeReady
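
The node_conditions.go lines near the top of this log ("node storage ephemeral capacity is 304681132Ki", "node cpu capacity is 8", printed once per node) summarize exactly the Capacity and Conditions blocks in the two node descriptions above. A rough client-go equivalent of that NodePressure check follows, with a hypothetical kubeconfig path:

// node_pressure.go - illustrative sketch of the NodePressure-style check.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	pressure := map[corev1.NodeConditionType]bool{
		corev1.NodeMemoryPressure: true,
		corev1.NodeDiskPressure:   true,
		corev1.NodePIDPressure:    true,
	}
	for _, n := range nodes.Items {
		// Matches "node cpu capacity is 8" / "...ephemeral capacity is 304681132Ki".
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name,
			n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
		for _, c := range n.Status.Conditions {
			if pressure[c.Type] && c.Status == corev1.ConditionTrue {
				fmt.Printf("  WARNING: %s is under %s\n", n.Name, c.Type)
			}
		}
	}
}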
	
	* 
	* ==> dmesg <==
	* [  +0.004939] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006659] FS-Cache: N-cookie d=00000000fedf765e{9p.inode} n=00000000579d86d3
	[  +0.008741] FS-Cache: N-key=[8] '80a00f0200000000'
	[  +0.355249] FS-Cache: Duplicate cookie detected
	[  +0.004687] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006749] FS-Cache: O-cookie d=00000000fedf765e{9p.inode} n=00000000a79b8bcd
	[  +0.007355] FS-Cache: O-key=[8] '8da00f0200000000'
	[  +0.004965] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.007940] FS-Cache: N-cookie d=00000000fedf765e{9p.inode} n=0000000043d232ba
	[  +0.008746] FS-Cache: N-key=[8] '8da00f0200000000'
	[ +21.449794] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Aug17 21:21] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 16 a0 ec 28 47 3d 8a 1e 5a be 09 3d 08 00
	[  +1.024489] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 16 a0 ec 28 47 3d 8a 1e 5a be 09 3d 08 00
	[Aug17 21:22] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 16 a0 ec 28 47 3d 8a 1e 5a be 09 3d 08 00
	[  +4.159602] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 16 a0 ec 28 47 3d 8a 1e 5a be 09 3d 08 00
	[  +8.191196] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 16 a0 ec 28 47 3d 8a 1e 5a be 09 3d 08 00
	[ +16.126446] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 16 a0 ec 28 47 3d 8a 1e 5a be 09 3d 08 00
	[Aug17 21:23] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 16 a0 ec 28 47 3d 8a 1e 5a be 09 3d 08 00
	
	* 
	* ==> etcd [cf8ef19e50d8b925974a428398b12c3de1f43fc4395de53d0ab04eddf2164e91] <==
	* {"level":"info","ts":"2023-08-17T21:29:48.353Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-08-17T21:29:48.353Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-08-17T21:29:48.353Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-08-17T21:29:48.353Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-08-17T21:29:48.353Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-08-17T21:29:48.354Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-08-17T21:29:48.354Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-08-17T21:29:48.841Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-08-17T21:29:48.842Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-08-17T21:29:48.842Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-08-17T21:29:48.842Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-08-17T21:29:48.842Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-08-17T21:29:48.842Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-08-17T21:29:48.842Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-08-17T21:29:48.843Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-938028 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-17T21:29:48.843Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-17T21:29:48.843Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-17T21:29:48.843Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-17T21:29:48.843Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-17T21:29:48.843Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-08-17T21:29:48.843Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-17T21:29:48.843Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-17T21:29:48.843Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-17T21:29:48.844Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-08-17T21:29:48.844Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  21:30:35 up  1:13,  0 users,  load average: 1.12, 0.95, 0.67
	Linux multinode-938028 5.15.0-1039-gcp #47~20.04.1-Ubuntu SMP Thu Jul 27 22:40:03 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [cd830225b55d96cccae58e173b32199f8e7f8df5b09499d90c256f45cb13e85c] <==
	* I0817 21:30:07.725566       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0817 21:30:07.725846       1 main.go:107] hostIP = 192.168.58.2
	podIP = 192.168.58.2
	I0817 21:30:07.727536       1 main.go:116] setting mtu 1500 for CNI 
	I0817 21:30:07.727568       1 main.go:146] kindnetd IP family: "ipv4"
	I0817 21:30:07.727593       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0817 21:30:08.123359       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0817 21:30:08.123385       1 main.go:227] handling current node
	I0817 21:30:18.129006       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0817 21:30:18.129035       1 main.go:227] handling current node
	I0817 21:30:28.141134       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0817 21:30:28.141158       1 main.go:227] handling current node
	I0817 21:30:28.141167       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0817 21:30:28.141172       1 main.go:250] Node multinode-938028-m02 has CIDR [10.244.1.0/24] 
	I0817 21:30:28.141312       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
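
kindnet's last lines show it learning the second node's pod CIDR and installing a route for 10.244.1.0/24 via 192.168.58.3. A minimal sketch of that single step with the vishvananda/netlink package (Linux only, needs root or CAP_NET_ADMIN; the CIDR and gateway are copied from the log above):

// add_route.go - illustrative sketch of the route kindnet installs above.
package main

import (
	"net"

	"github.com/vishvananda/netlink"
)

func main() {
	_, dst, err := net.ParseCIDR("10.244.1.0/24") // the remote node's pod CIDR
	if err != nil {
		panic(err)
	}
	route := &netlink.Route{
		Dst: dst,
		Gw:  net.ParseIP("192.168.58.3"), // the remote node's InternalIP
	}
	// Replace rather than Add keeps the sketch idempotent across reruns.
	if err := netlink.RouteReplace(route); err != nil {
		panic(err)
	}
}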
	
	* 
	* ==> kube-apiserver [e9114519839bfbdd073eca3b60934c4db1770a1e64845d20ba657beb6585754a] <==
	* I0817 21:29:50.938344       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0817 21:29:50.938374       1 cache.go:39] Caches are synced for autoregister controller
	I0817 21:29:50.941575       1 controller.go:624] quota admission added evaluator for: namespaces
	I0817 21:29:51.022655       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0817 21:29:51.029945       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0817 21:29:51.030032       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0817 21:29:51.031224       1 shared_informer.go:318] Caches are synced for configmaps
	I0817 21:29:51.031526       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0817 21:29:51.122902       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0817 21:29:51.618751       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0817 21:29:51.835519       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0817 21:29:51.839071       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0817 21:29:51.839091       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0817 21:29:52.202938       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0817 21:29:52.232978       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0817 21:29:52.345868       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0817 21:29:52.351979       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0817 21:29:52.352927       1 controller.go:624] quota admission added evaluator for: endpoints
	I0817 21:29:52.356680       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0817 21:29:52.936035       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0817 21:29:53.918560       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0817 21:29:53.931614       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0817 21:29:53.939211       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0817 21:30:06.569921       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0817 21:30:06.831877       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [70d909f4e3514704b8c49ef5eba18963448282b52b59b24862e3de8400943d2d] <==
	* I0817 21:30:06.724150       1 event.go:307] "Event occurred" object="kube-system/etcd-multinode-938028" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0817 21:30:06.729715       1 event.go:307] "Event occurred" object="kube-system/kube-apiserver-multinode-938028" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0817 21:30:06.730996       1 event.go:307] "Event occurred" object="kube-system/kube-controller-manager-multinode-938028" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0817 21:30:06.731104       1 event.go:307] "Event occurred" object="kube-system/kube-scheduler-multinode-938028" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0817 21:30:06.769985       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5d78c9869d to 1 from 2"
	I0817 21:30:06.779002       1 shared_informer.go:318] Caches are synced for resource quota
	I0817 21:30:06.822762       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5d78c9869d-lp9ct"
	I0817 21:30:06.823031       1 shared_informer.go:318] Caches are synced for resource quota
	I0817 21:30:06.841784       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-bf5b5"
	I0817 21:30:06.843080       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-qm6gj"
	I0817 21:30:07.134492       1 shared_informer.go:318] Caches are synced for garbage collector
	I0817 21:30:07.134534       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0817 21:30:07.222115       1 shared_informer.go:318] Caches are synced for garbage collector
	I0817 21:30:11.667789       1 node_lifecycle_controller.go:1046] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0817 21:30:26.541961       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-938028-m02\" does not exist"
	I0817 21:30:26.548137       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-938028-m02" podCIDRs=[10.244.1.0/24]
	I0817 21:30:26.551135       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-w8k8m"
	I0817 21:30:26.554417       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-9hrr6"
	I0817 21:30:26.670005       1 event.go:307] "Event occurred" object="multinode-938028-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-938028-m02 event: Registered Node multinode-938028-m02 in Controller"
	I0817 21:30:26.670060       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-938028-m02"
	W0817 21:30:27.987237       1 topologycache.go:232] Can't get CPU or zone information for multinode-938028-m02 node
	I0817 21:30:30.431111       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-67b7f59bb to 2"
	I0817 21:30:30.437960       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-khspl"
	I0817 21:30:30.441072       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-b9qpl"
	I0817 21:30:31.678824       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb-khspl" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-67b7f59bb-khspl"
	
	* 
	* ==> kube-proxy [50a4e347742c34a9a67b08444bce8b3a6cddf1f1f58cd63d6ebda29a5bfdfd13] <==
	* I0817 21:30:07.754513       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I0817 21:30:07.754615       1 server_others.go:110] "Detected node IP" address="192.168.58.2"
	I0817 21:30:07.754639       1 server_others.go:554] "Using iptables proxy"
	I0817 21:30:07.771930       1 server_others.go:192] "Using iptables Proxier"
	I0817 21:30:07.771963       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0817 21:30:07.771975       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0817 21:30:07.771990       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0817 21:30:07.772027       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0817 21:30:07.772663       1 server.go:658] "Version info" version="v1.27.4"
	I0817 21:30:07.772729       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0817 21:30:07.773359       1 config.go:97] "Starting endpoint slice config controller"
	I0817 21:30:07.773443       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0817 21:30:07.773387       1 config.go:315] "Starting node config controller"
	I0817 21:30:07.773539       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0817 21:30:07.773404       1 config.go:188] "Starting service config controller"
	I0817 21:30:07.773576       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0817 21:30:07.874874       1 shared_informer.go:318] Caches are synced for service config
	I0817 21:30:07.874917       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0817 21:30:07.874939       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [16abeca7f5f1d0bbea11649599eb6c124eec808e3d04cd4a53a25d4a0613113b] <==
	* E0817 21:29:51.026776       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0817 21:29:51.026726       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0817 21:29:51.026832       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0817 21:29:51.026759       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 21:29:51.026669       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0817 21:29:51.026862       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0817 21:29:51.026869       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0817 21:29:51.026984       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0817 21:29:51.026665       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0817 21:29:51.027013       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0817 21:29:51.027251       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0817 21:29:51.027284       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0817 21:29:51.846264       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0817 21:29:51.846295       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0817 21:29:51.913573       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0817 21:29:51.913623       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0817 21:29:51.970660       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0817 21:29:51.970699       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0817 21:29:51.982101       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0817 21:29:51.982143       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0817 21:29:52.052490       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0817 21:29:52.052521       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0817 21:29:52.058797       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0817 21:29:52.058828       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0817 21:29:52.422476       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Aug 17 21:30:06 multinode-938028 kubelet[1587]: I0817 21:30:06.924053    1587 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/39b3791d-3973-4cb6-ac55-eecde2f2fd0f-lib-modules\") pod \"kube-proxy-bf5b5\" (UID: \"39b3791d-3973-4cb6-ac55-eecde2f2fd0f\") " pod="kube-system/kube-proxy-bf5b5"
	Aug 17 21:30:06 multinode-938028 kubelet[1587]: I0817 21:30:06.924088    1587 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/39b3791d-3973-4cb6-ac55-eecde2f2fd0f-xtables-lock\") pod \"kube-proxy-bf5b5\" (UID: \"39b3791d-3973-4cb6-ac55-eecde2f2fd0f\") " pod="kube-system/kube-proxy-bf5b5"
	Aug 17 21:30:07 multinode-938028 kubelet[1587]: I0817 21:30:07.025014    1587 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b0d01f7-ea47-41a7-9b63-9ca0e667333d-xtables-lock\") pod \"kindnet-qm6gj\" (UID: \"5b0d01f7-ea47-41a7-9b63-9ca0e667333d\") " pod="kube-system/kindnet-qm6gj"
	Aug 17 21:30:07 multinode-938028 kubelet[1587]: I0817 21:30:07.025070    1587 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c65ms\" (UniqueName: \"kubernetes.io/projected/5b0d01f7-ea47-41a7-9b63-9ca0e667333d-kube-api-access-c65ms\") pod \"kindnet-qm6gj\" (UID: \"5b0d01f7-ea47-41a7-9b63-9ca0e667333d\") " pod="kube-system/kindnet-qm6gj"
	Aug 17 21:30:07 multinode-938028 kubelet[1587]: I0817 21:30:07.025157    1587 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5b0d01f7-ea47-41a7-9b63-9ca0e667333d-cni-cfg\") pod \"kindnet-qm6gj\" (UID: \"5b0d01f7-ea47-41a7-9b63-9ca0e667333d\") " pod="kube-system/kindnet-qm6gj"
	Aug 17 21:30:07 multinode-938028 kubelet[1587]: I0817 21:30:07.025185    1587 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b0d01f7-ea47-41a7-9b63-9ca0e667333d-lib-modules\") pod \"kindnet-qm6gj\" (UID: \"5b0d01f7-ea47-41a7-9b63-9ca0e667333d\") " pod="kube-system/kindnet-qm6gj"
	Aug 17 21:30:07 multinode-938028 kubelet[1587]: W0817 21:30:07.242712    1587 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/5ae5510f223cafb802b34b5efa574c24fd46098c1d7a1fa53350cbcba3370595/crio-cd8c76513f80ba880b487eec1d5e0b282a9f15db48ad8c3fc32d65a9c74a6557 WatchSource:0}: Error finding container cd8c76513f80ba880b487eec1d5e0b282a9f15db48ad8c3fc32d65a9c74a6557: Status 404 returned error can't find the container with id cd8c76513f80ba880b487eec1d5e0b282a9f15db48ad8c3fc32d65a9c74a6557
	Aug 17 21:30:07 multinode-938028 kubelet[1587]: W0817 21:30:07.242967    1587 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/5ae5510f223cafb802b34b5efa574c24fd46098c1d7a1fa53350cbcba3370595/crio-115fe3af3fcb57dd4911015d146278a638c302ef34718412233155b7236da978 WatchSource:0}: Error finding container 115fe3af3fcb57dd4911015d146278a638c302ef34718412233155b7236da978: Status 404 returned error can't find the container with id 115fe3af3fcb57dd4911015d146278a638c302ef34718412233155b7236da978
	Aug 17 21:30:08 multinode-938028 kubelet[1587]: I0817 21:30:08.065577    1587 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-bf5b5" podStartSLOduration=2.065530004 podCreationTimestamp="2023-08-17 21:30:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-17 21:30:08.065376024 +0000 UTC m=+14.169207750" watchObservedRunningTime="2023-08-17 21:30:08.065530004 +0000 UTC m=+14.169361732"
	Aug 17 21:30:08 multinode-938028 kubelet[1587]: I0817 21:30:08.074047    1587 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-qm6gj" podStartSLOduration=2.074005723 podCreationTimestamp="2023-08-17 21:30:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-17 21:30:08.07388882 +0000 UTC m=+14.177720546" watchObservedRunningTime="2023-08-17 21:30:08.074005723 +0000 UTC m=+14.177837451"
	Aug 17 21:30:08 multinode-938028 kubelet[1587]: I0817 21:30:08.458369    1587 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Aug 17 21:30:08 multinode-938028 kubelet[1587]: I0817 21:30:08.478421    1587 topology_manager.go:212] "Topology Admit Handler"
	Aug 17 21:30:08 multinode-938028 kubelet[1587]: I0817 21:30:08.634239    1587 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4qxs\" (UniqueName: \"kubernetes.io/projected/9cb10fd3-6480-47a0-8698-0573bb8dbfd1-kube-api-access-j4qxs\") pod \"coredns-5d78c9869d-klmz7\" (UID: \"9cb10fd3-6480-47a0-8698-0573bb8dbfd1\") " pod="kube-system/coredns-5d78c9869d-klmz7"
	Aug 17 21:30:08 multinode-938028 kubelet[1587]: I0817 21:30:08.634307    1587 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9cb10fd3-6480-47a0-8698-0573bb8dbfd1-config-volume\") pod \"coredns-5d78c9869d-klmz7\" (UID: \"9cb10fd3-6480-47a0-8698-0573bb8dbfd1\") " pod="kube-system/coredns-5d78c9869d-klmz7"
	Aug 17 21:30:08 multinode-938028 kubelet[1587]: W0817 21:30:08.838554    1587 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/5ae5510f223cafb802b34b5efa574c24fd46098c1d7a1fa53350cbcba3370595/crio-7ca38b15f6db6f0923196a9eb9daeff8bd939aa0a425d29717298477195f6b50 WatchSource:0}: Error finding container 7ca38b15f6db6f0923196a9eb9daeff8bd939aa0a425d29717298477195f6b50: Status 404 returned error can't find the container with id 7ca38b15f6db6f0923196a9eb9daeff8bd939aa0a425d29717298477195f6b50
	Aug 17 21:30:09 multinode-938028 kubelet[1587]: I0817 21:30:09.070385    1587 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-klmz7" podStartSLOduration=3.070342134 podCreationTimestamp="2023-08-17 21:30:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-17 21:30:09.070041684 +0000 UTC m=+15.173873431" watchObservedRunningTime="2023-08-17 21:30:09.070342134 +0000 UTC m=+15.174173860"
	Aug 17 21:30:09 multinode-938028 kubelet[1587]: I0817 21:30:09.939241    1587 topology_manager.go:212] "Topology Admit Handler"
	Aug 17 21:30:10 multinode-938028 kubelet[1587]: I0817 21:30:10.042918    1587 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2717746b-904a-44d5-82f6-301899f718aa-tmp\") pod \"storage-provisioner\" (UID: \"2717746b-904a-44d5-82f6-301899f718aa\") " pod="kube-system/storage-provisioner"
	Aug 17 21:30:10 multinode-938028 kubelet[1587]: I0817 21:30:10.042977    1587 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-298gg\" (UniqueName: \"kubernetes.io/projected/2717746b-904a-44d5-82f6-301899f718aa-kube-api-access-298gg\") pod \"storage-provisioner\" (UID: \"2717746b-904a-44d5-82f6-301899f718aa\") " pod="kube-system/storage-provisioner"
	Aug 17 21:30:10 multinode-938028 kubelet[1587]: W0817 21:30:10.270539    1587 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/5ae5510f223cafb802b34b5efa574c24fd46098c1d7a1fa53350cbcba3370595/crio-04481b0a60dab9810106056e198a8fce686d94f954e67626fb7d574b0ae7391f WatchSource:0}: Error finding container 04481b0a60dab9810106056e198a8fce686d94f954e67626fb7d574b0ae7391f: Status 404 returned error can't find the container with id 04481b0a60dab9810106056e198a8fce686d94f954e67626fb7d574b0ae7391f
	Aug 17 21:30:11 multinode-938028 kubelet[1587]: I0817 21:30:11.073650    1587 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=4.073615817 podCreationTimestamp="2023-08-17 21:30:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-17 21:30:11.073589021 +0000 UTC m=+17.177420748" watchObservedRunningTime="2023-08-17 21:30:11.073615817 +0000 UTC m=+17.177447544"
	Aug 17 21:30:30 multinode-938028 kubelet[1587]: I0817 21:30:30.447344    1587 topology_manager.go:212] "Topology Admit Handler"
	Aug 17 21:30:30 multinode-938028 kubelet[1587]: I0817 21:30:30.549021    1587 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xldzs\" (UniqueName: \"kubernetes.io/projected/0db905a2-6d6c-413b-ac9c-ed80d6ac087d-kube-api-access-xldzs\") pod \"busybox-67b7f59bb-b9qpl\" (UID: \"0db905a2-6d6c-413b-ac9c-ed80d6ac087d\") " pod="default/busybox-67b7f59bb-b9qpl"
	Aug 17 21:30:30 multinode-938028 kubelet[1587]: W0817 21:30:30.806733    1587 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/5ae5510f223cafb802b34b5efa574c24fd46098c1d7a1fa53350cbcba3370595/crio-9870676a3721be3b1d5dc106cf99c32766efa02f5ae5ac89ed364c48a945e47a WatchSource:0}: Error finding container 9870676a3721be3b1d5dc106cf99c32766efa02f5ae5ac89ed364c48a945e47a: Status 404 returned error can't find the container with id 9870676a3721be3b1d5dc106cf99c32766efa02f5ae5ac89ed364c48a945e47a
	Aug 17 21:30:32 multinode-938028 kubelet[1587]: I0817 21:30:32.111923    1587 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-67b7f59bb-b9qpl" podStartSLOduration=1.522296335 podCreationTimestamp="2023-08-17 21:30:30 +0000 UTC" firstStartedPulling="2023-08-17 21:30:30.810532506 +0000 UTC m=+36.914364227" lastFinishedPulling="2023-08-17 21:30:31.400103239 +0000 UTC m=+37.503934948" observedRunningTime="2023-08-17 21:30:32.111736599 +0000 UTC m=+38.215568342" watchObservedRunningTime="2023-08-17 21:30:32.111867056 +0000 UTC m=+38.215698783"
	

                                                
                                                
-- /stdout --
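A note on the control-plane logs above: the kube-scheduler's "forbidden" list/watch errors at 21:29:51 are ordinary startup-order noise (the scheduler's informers come up before the restarted API server has finished wiring authorization), and they stop at the 21:29:52 "Caches are synced" line; likewise, the kubelet's "Status 404 ... can't find the container" watch warnings are a benign race between cAdvisor and short-lived cri-o cgroup directories. If such forbidden errors persisted past startup, a SelfSubjectAccessReview would confirm whether the identity genuinely lacks the verb. A minimal client-go sketch (standard upstream API; the kubeconfig location is an assumption):

	// Sketch: ask the API server whether the current identity may list pods
	// cluster-wide, i.e. the permission the reflector errors above say was missing.
	package main

	import (
		"context"
		"fmt"

		authv1 "k8s.io/api/authorization/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: a reachable kubeconfig at the default location.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		sar := &authv1.SelfSubjectAccessReview{
			Spec: authv1.SelfSubjectAccessReviewSpec{
				ResourceAttributes: &authv1.ResourceAttributes{Verb: "list", Resource: "pods"},
			},
		}
		res, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(context.TODO(), sar, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("allowed=%v reason=%q\n", res.Status.Allowed, res.Status.Reason)
	}

Run with the scheduler's own kubeconfig (e.g. /etc/kubernetes/scheduler.conf on a kubeadm-provisioned node) to test the exact identity named in the errors above.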
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-938028 -n multinode-938028
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-938028 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (2.90s)
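For context on the failure recorded above: in outline, PingHostFrom2Pods execs into the deployed busybox pods and checks that the host is reachable from inside the cluster, and it is that reachability probe which failed here. A stand-alone approximation, hedged as a sketch (kubectl on PATH, the pod name taken from the kubelet log above, and minikube's host.minikube.internal host entry are all assumptions):

	// Hypothetical repro of the failed probe: one ICMP ping from a pod back to the host.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "multinode-938028",
			"exec", "busybox-67b7f59bb-b9qpl", "--",
			"sh", "-c", "ping -c 1 host.minikube.internal").CombinedOutput()
		fmt.Printf("err=%v\n%s", err, out)
	}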

                                                
                                    
x
+
TestRunningBinaryUpgrade (71.82s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
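In outline, the two Run lines below are the whole test: bring a profile up with a historic release binary, then re-run start against the same live profile with the freshly built binary. A compressed sketch of that sequence (binary paths and flags copied from the log; everything else illustrative):

	// Sketch of the upgrade sequence exercised below; both binary paths come from
	// the harness log, and error handling is trimmed for brevity.
	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		oldStart := exec.Command("/tmp/minikube-v1.9.0.3710228972.exe", "start",
			"-p", "running-upgrade-537915", "--memory=2200",
			"--vm-driver=docker", "--container-runtime=crio")
		newStart := exec.Command("out/minikube-linux-amd64", "start",
			"-p", "running-upgrade-537915", "--memory=2200",
			"--alsologtostderr", "-v=1", "--driver=docker", "--container-runtime=crio")
		for _, cmd := range []*exec.Cmd{oldStart, newStart} {
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				log.Fatalf("%s: %v", cmd.Path, err) // here, the second start is the one that exits 90
			}
		}
	}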
version_upgrade_test.go:132: (dbg) Run:  /tmp/minikube-v1.9.0.3710228972.exe start -p running-upgrade-537915 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:132: (dbg) Done: /tmp/minikube-v1.9.0.3710228972.exe start -p running-upgrade-537915 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m5.97326621s)
version_upgrade_test.go:142: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-537915 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:142: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-537915 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (2.36386261s)

                                                
                                                
-- stdout --
	* [running-upgrade-537915] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16865
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16865-10716/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-10716/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.4
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-537915 in cluster running-upgrade-537915
	* Pulling base image ...
	* Updating the running docker "running-upgrade-537915" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0817 21:42:02.172406  184303 out.go:296] Setting OutFile to fd 1 ...
	I0817 21:42:02.172555  184303 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:42:02.172563  184303 out.go:309] Setting ErrFile to fd 2...
	I0817 21:42:02.172568  184303 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:42:02.172770  184303 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-10716/.minikube/bin
	I0817 21:42:02.173327  184303 out.go:303] Setting JSON to false
	I0817 21:42:02.174909  184303 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5070,"bootTime":1692303452,"procs":620,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0817 21:42:02.174968  184303 start.go:138] virtualization: kvm guest
	I0817 21:42:02.178525  184303 out.go:177] * [running-upgrade-537915] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0817 21:42:02.180308  184303 out.go:177]   - MINIKUBE_LOCATION=16865
	I0817 21:42:02.181796  184303 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0817 21:42:02.180387  184303 notify.go:220] Checking for updates...
	I0817 21:42:02.184982  184303 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16865-10716/kubeconfig
	I0817 21:42:02.186533  184303 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-10716/.minikube
	I0817 21:42:02.187940  184303 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0817 21:42:02.189286  184303 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0817 21:42:02.191023  184303 config.go:182] Loaded profile config "running-upgrade-537915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0817 21:42:02.191050  184303 start_flags.go:695] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0817 21:42:02.193031  184303 out.go:177] * Kubernetes 1.27.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.4
	I0817 21:42:02.194408  184303 driver.go:373] Setting default libvirt URI to qemu:///system
	I0817 21:42:02.219616  184303 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0817 21:42:02.219690  184303 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0817 21:42:02.277277  184303 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:108 OomKillDisable:true NGoroutines:95 SystemTime:2023-08-17 21:42:02.268415628 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0817 21:42:02.277372  184303 docker.go:294] overlay module found
	I0817 21:42:02.279265  184303 out.go:177] * Using the docker driver based on existing profile
	I0817 21:42:02.280585  184303 start.go:298] selected driver: docker
	I0817 21:42:02.280594  184303 start.go:902] validating driver "docker" against &{Name:running-upgrade-537915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-537915 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.4 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:42:02.280683  184303 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0817 21:42:02.281440  184303 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0817 21:42:02.343585  184303 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:108 OomKillDisable:true NGoroutines:95 SystemTime:2023-08-17 21:42:02.333417104 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0817 21:42:02.343931  184303 cni.go:84] Creating CNI manager for ""
	I0817 21:42:02.343950  184303 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I0817 21:42:02.343958  184303 start_flags.go:319] config:
	{Name:running-upgrade-537915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-537915 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.4 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:42:02.346032  184303 out.go:177] * Starting control plane node running-upgrade-537915 in cluster running-upgrade-537915
	I0817 21:42:02.347529  184303 cache.go:122] Beginning downloading kic base image for docker with crio
	I0817 21:42:02.349009  184303 out.go:177] * Pulling base image ...
	I0817 21:42:02.350378  184303 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I0817 21:42:02.350482  184303 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0817 21:42:02.371348  184303 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0817 21:42:02.371378  184303 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	W0817 21:42:02.385238  184303 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0817 21:42:02.385415  184303 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/running-upgrade-537915/config.json ...
	I0817 21:42:02.385530  184303 cache.go:107] acquiring lock: {Name:mka28ae3f834cef859bd0f08bd4773dbe4a9f6ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 21:42:02.385541  184303 cache.go:107] acquiring lock: {Name:mk5a833a21c949e3802b4f343325f5652c3e06f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 21:42:02.385619  184303 cache.go:107] acquiring lock: {Name:mkf22040c5968825089b516855985cdf733fc231 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 21:42:02.385665  184303 cache.go:115] /home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I0817 21:42:02.385686  184303 cache.go:195] Successfully downloaded all kic artifacts
	I0817 21:42:02.385684  184303 cache.go:107] acquiring lock: {Name:mk7a041a54b040052a5b246b36f718344ff8a8db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 21:42:02.385694  184303 cache.go:115] /home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I0817 21:42:02.385715  184303 start.go:365] acquiring machines lock for running-upgrade-537915: {Name:mk707089be194da27fc852989e8ae93f89caa0ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 21:42:02.385649  184303 cache.go:107] acquiring lock: {Name:mk1508297ee795efd2a43f8ab4f7b4fe06fe7032 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 21:42:02.385736  184303 cache.go:107] acquiring lock: {Name:mk6deac267ae89cbc10d2cf3a9dadcabb949e0ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 21:42:02.385744  184303 cache.go:107] acquiring lock: {Name:mk5932deb6d20e639ba2681fe56b15445c3c4b17 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 21:42:02.385714  184303 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 98.474µs
	I0817 21:42:02.385778  184303 start.go:369] acquired machines lock for "running-upgrade-537915" in 48.695µs
	I0817 21:42:02.385780  184303 cache.go:115] /home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I0817 21:42:02.385794  184303 cache.go:115] /home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I0817 21:42:02.385796  184303 start.go:96] Skipping create...Using existing machine configuration
	I0817 21:42:02.385685  184303 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 164.329µs
	I0817 21:42:02.385803  184303 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 67.379µs
	I0817 21:42:02.385812  184303 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I0817 21:42:02.385814  184303 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I0817 21:42:02.385791  184303 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 58.381µs
	I0817 21:42:02.385853  184303 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I0817 21:42:02.385780  184303 cache.go:115] /home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I0817 21:42:02.385887  184303 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 241.073µs
	I0817 21:42:02.385927  184303 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I0817 21:42:02.385721  184303 cache.go:115] /home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I0817 21:42:02.385937  184303 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 403.696µs
	I0817 21:42:02.385950  184303 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I0817 21:42:02.385727  184303 cache.go:115] /home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0817 21:42:02.385959  184303 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 278.713µs
	I0817 21:42:02.385971  184303 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0817 21:42:02.385782  184303 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I0817 21:42:02.385532  184303 cache.go:107] acquiring lock: {Name:mkdba23696bc8c66a1c8337799b34bbcd861dff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 21:42:02.386062  184303 cache.go:115] /home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0817 21:42:02.386078  184303 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 558.299µs
	I0817 21:42:02.386087  184303 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0817 21:42:02.386094  184303 cache.go:87] Successfully saved all images to host disk.
	I0817 21:42:02.385802  184303 fix.go:54] fixHost starting: m01
	I0817 21:42:02.386398  184303 cli_runner.go:164] Run: docker container inspect running-upgrade-537915 --format={{.State.Status}}
	I0817 21:42:02.407157  184303 fix.go:102] recreateIfNeeded on running-upgrade-537915: state=Running err=<nil>
	W0817 21:42:02.407196  184303 fix.go:128] unexpected machine state, will restart: <nil>
	I0817 21:42:02.409007  184303 out.go:177] * Updating the running docker "running-upgrade-537915" container ...
	I0817 21:42:02.410443  184303 machine.go:88] provisioning docker machine ...
	I0817 21:42:02.410471  184303 ubuntu.go:169] provisioning hostname "running-upgrade-537915"
	I0817 21:42:02.410536  184303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-537915
	I0817 21:42:02.433972  184303 main.go:141] libmachine: Using SSH client type: native
	I0817 21:42:02.434437  184303 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 127.0.0.1 32941 <nil> <nil>}
	I0817 21:42:02.434460  184303 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-537915 && echo "running-upgrade-537915" | sudo tee /etc/hostname
	I0817 21:42:02.554043  184303 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-537915
	
	I0817 21:42:02.554119  184303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-537915
	I0817 21:42:02.573132  184303 main.go:141] libmachine: Using SSH client type: native
	I0817 21:42:02.573536  184303 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 127.0.0.1 32941 <nil> <nil>}
	I0817 21:42:02.573563  184303 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-537915' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-537915/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-537915' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 21:42:02.681939  184303 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 21:42:02.681970  184303 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16865-10716/.minikube CaCertPath:/home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16865-10716/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16865-10716/.minikube}
	I0817 21:42:02.682006  184303 ubuntu.go:177] setting up certificates
	I0817 21:42:02.682016  184303 provision.go:83] configureAuth start
	I0817 21:42:02.682075  184303 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-537915
	I0817 21:42:02.699685  184303 provision.go:138] copyHostCerts
	I0817 21:42:02.699737  184303 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-10716/.minikube/ca.pem, removing ...
	I0817 21:42:02.699746  184303 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-10716/.minikube/ca.pem
	I0817 21:42:02.699802  184303 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16865-10716/.minikube/ca.pem (1078 bytes)
	I0817 21:42:02.699904  184303 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-10716/.minikube/cert.pem, removing ...
	I0817 21:42:02.699916  184303 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-10716/.minikube/cert.pem
	I0817 21:42:02.699959  184303 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16865-10716/.minikube/cert.pem (1123 bytes)
	I0817 21:42:02.700046  184303 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-10716/.minikube/key.pem, removing ...
	I0817 21:42:02.700057  184303 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-10716/.minikube/key.pem
	I0817 21:42:02.700101  184303 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16865-10716/.minikube/key.pem (1679 bytes)
	I0817 21:42:02.700178  184303 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16865-10716/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-537915 san=[172.17.0.4 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-537915]
	I0817 21:42:03.007607  184303 provision.go:172] copyRemoteCerts
	I0817 21:42:03.007675  184303 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 21:42:03.007715  184303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-537915
	I0817 21:42:03.026022  184303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32941 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/running-upgrade-537915/id_rsa Username:docker}
	I0817 21:42:03.105702  184303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0817 21:42:03.125600  184303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0817 21:42:03.145544  184303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0817 21:42:03.168877  184303 provision.go:86] duration metric: configureAuth took 486.840145ms
	I0817 21:42:03.168904  184303 ubuntu.go:193] setting minikube options for container-runtime
	I0817 21:42:03.169053  184303 config.go:182] Loaded profile config "running-upgrade-537915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0817 21:42:03.169144  184303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-537915
	I0817 21:42:03.191823  184303 main.go:141] libmachine: Using SSH client type: native
	I0817 21:42:03.192440  184303 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 127.0.0.1 32941 <nil> <nil>}
	I0817 21:42:03.192462  184303 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0817 21:42:03.631521  184303 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0817 21:42:03.631544  184303 machine.go:91] provisioned docker machine in 1.221084653s
	I0817 21:42:03.631554  184303 start.go:300] post-start starting for "running-upgrade-537915" (driver="docker")
	I0817 21:42:03.631563  184303 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 21:42:03.631614  184303 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 21:42:03.631651  184303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-537915
	I0817 21:42:03.652622  184303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32941 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/running-upgrade-537915/id_rsa Username:docker}
	I0817 21:42:03.739385  184303 ssh_runner.go:195] Run: cat /etc/os-release
	I0817 21:42:03.742728  184303 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0817 21:42:03.742761  184303 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0817 21:42:03.742770  184303 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0817 21:42:03.742775  184303 info.go:137] Remote host: Ubuntu 19.10
	I0817 21:42:03.742783  184303 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-10716/.minikube/addons for local assets ...
	I0817 21:42:03.742829  184303 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-10716/.minikube/files for local assets ...
	I0817 21:42:03.742897  184303 filesync.go:149] local asset: /home/jenkins/minikube-integration/16865-10716/.minikube/files/etc/ssl/certs/175042.pem -> 175042.pem in /etc/ssl/certs
	I0817 21:42:03.742993  184303 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0817 21:42:03.752048  184303 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/files/etc/ssl/certs/175042.pem --> /etc/ssl/certs/175042.pem (1708 bytes)
	I0817 21:42:03.774481  184303 start.go:303] post-start completed in 142.914111ms
	I0817 21:42:03.774554  184303 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 21:42:03.774603  184303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-537915
	I0817 21:42:03.796844  184303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32941 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/running-upgrade-537915/id_rsa Username:docker}
	I0817 21:42:03.878791  184303 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0817 21:42:03.882542  184303 fix.go:56] fixHost completed within 1.496735804s
	I0817 21:42:03.882560  184303 start.go:83] releasing machines lock for "running-upgrade-537915", held for 1.496772917s
	I0817 21:42:03.882610  184303 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-537915
	I0817 21:42:03.899067  184303 ssh_runner.go:195] Run: cat /version.json
	I0817 21:42:03.899106  184303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-537915
	I0817 21:42:03.899134  184303 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0817 21:42:03.899203  184303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-537915
	I0817 21:42:03.915355  184303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32941 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/running-upgrade-537915/id_rsa Username:docker}
	I0817 21:42:03.923185  184303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32941 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/running-upgrade-537915/id_rsa Username:docker}
	W0817 21:42:03.997029  184303 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0817 21:42:03.997112  184303 ssh_runner.go:195] Run: systemctl --version
	I0817 21:42:04.032475  184303 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0817 21:42:04.082791  184303 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0817 21:42:04.086843  184303 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 21:42:04.102021  184303 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0817 21:42:04.102080  184303 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 21:42:04.126425  184303 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0817 21:42:04.126445  184303 start.go:466] detecting cgroup driver to use...
	I0817 21:42:04.126482  184303 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0817 21:42:04.126524  184303 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0817 21:42:04.152730  184303 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0817 21:42:04.165630  184303 docker.go:196] disabling cri-docker service (if available) ...
	I0817 21:42:04.165697  184303 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0817 21:42:04.177827  184303 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0817 21:42:04.188535  184303 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0817 21:42:04.197004  184303 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0817 21:42:04.197053  184303 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0817 21:42:04.290011  184303 docker.go:212] disabling docker service ...
	I0817 21:42:04.290071  184303 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0817 21:42:04.299117  184303 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0817 21:42:04.307728  184303 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0817 21:42:04.380876  184303 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0817 21:42:04.453179  184303 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0817 21:42:04.464439  184303 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 21:42:04.477574  184303 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0817 21:42:04.477630  184303 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 21:42:04.486912  184303 out.go:177] 
	W0817 21:42:04.488266  184303 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0817 21:42:04.488280  184303 out.go:239] * 
	* 
	W0817 21:42:04.489112  184303 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0817 21:42:04.490752  184303 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:144: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-537915 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
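The root cause is visible in the stderr above: the guest provisioned by v1.9.0 (an Ubuntu 19.10 image, per the os-release probe) has no /etc/crio/crio.conf.d directory, so the new binary's sed against /etc/crio/crio.conf.d/02-crio.conf exits 2 and start aborts with RUNTIME_ENABLE, hence exit status 90. A tolerant variant would fall back to the legacy single-file config. A sketch, assuming it runs on the node itself (minikube's real code path is the crio.go shown above, driven over ssh_runner), with the legacy path being an assumption about old cri-o layouts:

	// Hypothetical fallback: try the modern drop-in first, then the legacy file.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func setPauseImage(image string) error {
		for _, conf := range []string{
			"/etc/crio/crio.conf.d/02-crio.conf", // target of the failing sed; absent on the v1.9.0 guest
			"/etc/crio/crio.conf",                // assumed legacy single-file layout
		} {
			sed := fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, image, conf)
			if exec.Command("sh", "-c", sed).Run() == nil {
				return nil
			}
		}
		return fmt.Errorf("no writable cri-o config found for pause_image %q", image)
	}

	func main() {
		if err := setPauseImage("registry.k8s.io/pause:3.2"); err != nil {
			fmt.Println(err)
		}
	}

Whether falling back or creating the drop-in is the right fix is a design call for the upgrade path; the sketch only shows where the status-2 exit comes from.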
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-08-17 21:42:04.506702018 +0000 UTC m=+1905.325507923
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-537915
helpers_test.go:235: (dbg) docker inspect running-upgrade-537915:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fc45cf1ea824e65000f8fc64208cbf610ef2aa0dc76123be0193c5105f0e8234",
	        "Created": "2023-08-17T21:40:56.645995683Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 165259,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-08-17T21:40:59.033672102Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/fc45cf1ea824e65000f8fc64208cbf610ef2aa0dc76123be0193c5105f0e8234/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fc45cf1ea824e65000f8fc64208cbf610ef2aa0dc76123be0193c5105f0e8234/hostname",
	        "HostsPath": "/var/lib/docker/containers/fc45cf1ea824e65000f8fc64208cbf610ef2aa0dc76123be0193c5105f0e8234/hosts",
	        "LogPath": "/var/lib/docker/containers/fc45cf1ea824e65000f8fc64208cbf610ef2aa0dc76123be0193c5105f0e8234/fc45cf1ea824e65000f8fc64208cbf610ef2aa0dc76123be0193c5105f0e8234-json.log",
	        "Name": "/running-upgrade-537915",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-537915:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3daa5092b845b0d4e0e5d6d6a9e660484e3c65cbf1318b67b3399c76265f8957-init/diff:/var/lib/docker/overlay2/57bd12b2f1a325c05d379271583c3fe17759b80508e7bdb6890a755ed92433de/diff:/var/lib/docker/overlay2/28887843bbf3c166d82caa3e606d55aba5f4e917e8cd655c61c6ba4da45c7885/diff:/var/lib/docker/overlay2/69471472f064a0693fe07828d4c5437bf1a5caa3845a997a240bc658c6e274ee/diff:/var/lib/docker/overlay2/04e4350e0e9fde5f15091ac8a8314a4fe46838703eef04eb8a1f12ef3445779c/diff:/var/lib/docker/overlay2/83f9996b4ea55fa9940668b01dde278ab46bcb77e5eb8de01e3750be5d45bac6/diff:/var/lib/docker/overlay2/cad95c9042633bb656acf351ac8367d5eca08e9de3cdd8f5014f7b7db1fc1389/diff:/var/lib/docker/overlay2/8f0141062b6f91cbb1f252ae955134a8dff3a56c1e082c6de7412dd676a9139e/diff:/var/lib/docker/overlay2/d614f01e05abaf6f2d7c192fe3ac2ef4de04ae12030da82b9665535357937ae7/diff:/var/lib/docker/overlay2/ae67c9c21a6cd2864837e5ca2af60119793275ced464104c8b740b83988e28b4/diff:/var/lib/docker/overlay2/ee82db
5cdcd8e2061b38c2557f5f6e63be3fc516f4051eb834ad869d3519a4da/diff:/var/lib/docker/overlay2/7be3b1871234ab33443b16095f57425d2ab173441d6bdefd109ada0ef74b4e35/diff:/var/lib/docker/overlay2/c6da3973283c90cfec308bd8efd426de77617fe7d6fb865bf3757b6b9352997d/diff:/var/lib/docker/overlay2/79b831e96fd9a440c55970b09ca86d4f7b5bb1e093bd0004ba8d7a43685a3105/diff:/var/lib/docker/overlay2/c58955ba958b4aad43cbdf3f59861587ebe071c69c97dd6ac638081e6e10121f/diff:/var/lib/docker/overlay2/fd30e02a65f474aaa97400a2d4ca27cb32b93848ad7fe1453e3dea49b7871259/diff:/var/lib/docker/overlay2/a7aa758fe008b61535f172f4538ff95396707b218ea745edb899f2ad4b893b78/diff:/var/lib/docker/overlay2/3d8b275222a34fad28dd582ee7223f07ec87cfcb8148808749216c9928f34752/diff:/var/lib/docker/overlay2/8cdddee70d1b3f99227473e988756636a2784e884b31133c8c2a88d1ec109691/diff:/var/lib/docker/overlay2/7040e632cbf91e994dcf4d64d5d6f2329df092b6e4720bf90f5071cd1fda7bf6/diff:/var/lib/docker/overlay2/46c308d86df73019274e84caced222b9eb1e449043337cdd95b3b84f5d747661/diff:/var/lib/d
ocker/overlay2/b4690beb32a4543fb28f16957b0e0e2d14fcd823689242bd9a2be58886935d79/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3daa5092b845b0d4e0e5d6d6a9e660484e3c65cbf1318b67b3399c76265f8957/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3daa5092b845b0d4e0e5d6d6a9e660484e3c65cbf1318b67b3399c76265f8957/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3daa5092b845b0d4e0e5d6d6a9e660484e3c65cbf1318b67b3399c76265f8957/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-537915",
	                "Source": "/var/lib/docker/volumes/running-upgrade-537915/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-537915",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-537915",
	                "name.minikube.sigs.k8s.io": "running-upgrade-537915",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3ff06948836d78de33f729a8ed226d6fc880b96f020ea8ef62ff0cec3aef119c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32941"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32940"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32939"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3ff06948836d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "e31d48c72558ea6e046297209fa39f5056498f14cfe92013acb5f585ccd2c08c",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.4",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:04",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "d7a173d9b1d1bccf25cf75d7c58a7e1ba0ac1143180ca832d818b8158dc323c8",
	                    "EndpointID": "e31d48c72558ea6e046297209fa39f5056498f14cfe92013acb5f585ccd2c08c",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.4",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:04",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-537915 -n running-upgrade-537915
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-537915 -n running-upgrade-537915: exit status 4 (293.954488ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0817 21:42:04.789576  185124 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-537915" does not appear in /home/jenkins/minikube-integration/16865-10716/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-537915" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-537915" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-537915
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-537915: (2.741787338s)
--- FAIL: TestRunningBinaryUpgrade (71.82s)
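Editor's note: both binary-upgrade tests in this run fail the same way: the HEAD binary exits with status 90 while restarting the cluster that minikube v1.9.0 created (the underlying RUNTIME_ENABLE error is visible in the TestStoppedBinaryUpgrade/Upgrade log below). The `exit status 4` from the status check above is a secondary symptom: because the upgraded start aborted, the profile was never written back to the kubeconfig, which is exactly what the "does not appear in .../kubeconfig" error reports. A minimal sketch of how that kubeconfig drift can be inspected and repaired by hand with standard kubectl/minikube commands (the profile name is taken from the log above):

	# List the cluster entries kubectl knows about; the failed profile is absent.
	kubectl config view -o jsonpath='{range .clusters[*]}{.name}{"\n"}{end}'
	# Re-point the kubeconfig at the still-running container, as the warning above suggests.
	minikube update-context -p running-upgrade-537915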

TestStoppedBinaryUpgrade/Upgrade (100.58s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /tmp/minikube-v1.9.0.3743869866.exe start -p stopped-upgrade-165125 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:195: (dbg) Done: /tmp/minikube-v1.9.0.3743869866.exe start -p stopped-upgrade-165125 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m23.767889816s)
version_upgrade_test.go:204: (dbg) Run:  /tmp/minikube-v1.9.0.3743869866.exe -p stopped-upgrade-165125 stop
version_upgrade_test.go:204: (dbg) Done: /tmp/minikube-v1.9.0.3743869866.exe -p stopped-upgrade-165125 stop: (11.013301332s)
version_upgrade_test.go:210: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-165125 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:210: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-165125 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (5.796458373s)

-- stdout --
	* [stopped-upgrade-165125] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16865
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16865-10716/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-10716/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.4
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-165125 in cluster stopped-upgrade-165125
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-165125" ...
	
	

-- /stdout --
** stderr ** 
	I0817 21:41:25.128475  173160 out.go:296] Setting OutFile to fd 1 ...
	I0817 21:41:25.128623  173160 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:41:25.128629  173160 out.go:309] Setting ErrFile to fd 2...
	I0817 21:41:25.128636  173160 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:41:25.128969  173160 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-10716/.minikube/bin
	I0817 21:41:25.129696  173160 out.go:303] Setting JSON to false
	I0817 21:41:25.131216  173160 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5033,"bootTime":1692303452,"procs":555,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0817 21:41:25.131298  173160 start.go:138] virtualization: kvm guest
	I0817 21:41:25.134002  173160 out.go:177] * [stopped-upgrade-165125] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0817 21:41:25.136082  173160 out.go:177]   - MINIKUBE_LOCATION=16865
	I0817 21:41:25.136220  173160 notify.go:220] Checking for updates...
	I0817 21:41:25.138433  173160 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0817 21:41:25.139779  173160 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16865-10716/kubeconfig
	I0817 21:41:25.141058  173160 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-10716/.minikube
	I0817 21:41:25.142391  173160 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0817 21:41:25.143801  173160 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0817 21:41:25.146057  173160 config.go:182] Loaded profile config "stopped-upgrade-165125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0817 21:41:25.146106  173160 start_flags.go:695] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0817 21:41:25.148432  173160 out.go:177] * Kubernetes 1.27.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.4
	I0817 21:41:25.149973  173160 driver.go:373] Setting default libvirt URI to qemu:///system
	I0817 21:41:25.179235  173160 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0817 21:41:25.179314  173160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0817 21:41:25.251659  173160 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:68 OomKillDisable:true NGoroutines:79 SystemTime:2023-08-17 21:41:25.241186558 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0817 21:41:25.251751  173160 docker.go:294] overlay module found
	I0817 21:41:25.253714  173160 out.go:177] * Using the docker driver based on existing profile
	I0817 21:41:25.255191  173160 start.go:298] selected driver: docker
	I0817 21:41:25.255214  173160 start.go:902] validating driver "docker" against &{Name:stopped-upgrade-165125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-165125 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:41:25.255341  173160 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0817 21:41:25.256294  173160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0817 21:41:25.322260  173160 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:68 OomKillDisable:true NGoroutines:79 SystemTime:2023-08-17 21:41:25.311736974 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0817 21:41:25.322592  173160 cni.go:84] Creating CNI manager for ""
	I0817 21:41:25.322613  173160 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I0817 21:41:25.322623  173160 start_flags.go:319] config:
	{Name:stopped-upgrade-165125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-165125 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:41:25.324840  173160 out.go:177] * Starting control plane node stopped-upgrade-165125 in cluster stopped-upgrade-165125
	I0817 21:41:25.326290  173160 cache.go:122] Beginning downloading kic base image for docker with crio
	I0817 21:41:25.327724  173160 out.go:177] * Pulling base image ...
	I0817 21:41:25.329081  173160 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I0817 21:41:25.329168  173160 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0817 21:41:25.347655  173160 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0817 21:41:25.347681  173160 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	W0817 21:41:25.368844  173160 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0817 21:41:25.369005  173160 cache.go:107] acquiring lock: {Name:mkdba23696bc8c66a1c8337799b34bbcd861dff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 21:41:25.369043  173160 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/stopped-upgrade-165125/config.json ...
	I0817 21:41:25.369027  173160 cache.go:107] acquiring lock: {Name:mkf22040c5968825089b516855985cdf733fc231 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 21:41:25.369079  173160 cache.go:107] acquiring lock: {Name:mk7a041a54b040052a5b246b36f718344ff8a8db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 21:41:25.369103  173160 cache.go:115] /home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0817 21:41:25.369116  173160 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 116.234µs
	I0817 21:41:25.369141  173160 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0817 21:41:25.369166  173160 cache.go:107] acquiring lock: {Name:mka28ae3f834cef859bd0f08bd4773dbe4a9f6ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 21:41:25.369216  173160 cache.go:115] /home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I0817 21:41:25.369222  173160 cache.go:115] /home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0817 21:41:25.369212  173160 cache.go:107] acquiring lock: {Name:mk5a833a21c949e3802b4f343325f5652c3e06f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 21:41:25.369228  173160 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 214.62µs
	I0817 21:41:25.369240  173160 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I0817 21:41:25.369231  173160 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 167.884µs
	I0817 21:41:25.369246  173160 cache.go:115] /home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I0817 21:41:25.369250  173160 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0817 21:41:25.369220  173160 cache.go:107] acquiring lock: {Name:mk5932deb6d20e639ba2681fe56b15445c3c4b17 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 21:41:25.369255  173160 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 96.097µs
	I0817 21:41:25.369266  173160 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I0817 21:41:25.369257  173160 cache.go:107] acquiring lock: {Name:mk1508297ee795efd2a43f8ab4f7b4fe06fe7032 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 21:41:25.369278  173160 cache.go:115] /home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I0817 21:41:25.369290  173160 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 110.745µs
	I0817 21:41:25.369300  173160 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I0817 21:41:25.369319  173160 cache.go:115] /home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I0817 21:41:25.369329  173160 cache.go:115] /home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I0817 21:41:25.369334  173160 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 157.334µs
	I0817 21:41:25.369358  173160 cache.go:195] Successfully downloaded all kic artifacts
	I0817 21:41:25.369367  173160 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I0817 21:41:25.369337  173160 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 265.849µs
	I0817 21:41:25.369378  173160 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I0817 21:41:25.369388  173160 start.go:365] acquiring machines lock for stopped-upgrade-165125: {Name:mk046dd3c210e72f878d2ce5bb200ec37f54ee93 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 21:41:25.369476  173160 start.go:369] acquired machines lock for "stopped-upgrade-165125" in 71.92µs
	I0817 21:41:25.369499  173160 start.go:96] Skipping create...Using existing machine configuration
	I0817 21:41:25.369510  173160 fix.go:54] fixHost starting: m01
	I0817 21:41:25.369523  173160 cache.go:107] acquiring lock: {Name:mk6deac267ae89cbc10d2cf3a9dadcabb949e0ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0817 21:41:25.369654  173160 cache.go:115] /home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I0817 21:41:25.369667  173160 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 181.847µs
	I0817 21:41:25.369681  173160 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/16865-10716/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I0817 21:41:25.369692  173160 cache.go:87] Successfully saved all images to host disk.
	I0817 21:41:25.369786  173160 cli_runner.go:164] Run: docker container inspect stopped-upgrade-165125 --format={{.State.Status}}
	I0817 21:41:25.386672  173160 fix.go:102] recreateIfNeeded on stopped-upgrade-165125: state=Stopped err=<nil>
	W0817 21:41:25.386705  173160 fix.go:128] unexpected machine state, will restart: <nil>
	I0817 21:41:25.388792  173160 out.go:177] * Restarting existing docker container for "stopped-upgrade-165125" ...
	I0817 21:41:25.390314  173160 cli_runner.go:164] Run: docker start stopped-upgrade-165125
	I0817 21:41:25.671463  173160 cli_runner.go:164] Run: docker container inspect stopped-upgrade-165125 --format={{.State.Status}}
	I0817 21:41:25.689012  173160 kic.go:426] container "stopped-upgrade-165125" state is running.
	I0817 21:41:25.692413  173160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-165125
	I0817 21:41:25.710849  173160 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/stopped-upgrade-165125/config.json ...
	I0817 21:41:25.711041  173160 machine.go:88] provisioning docker machine ...
	I0817 21:41:25.711069  173160 ubuntu.go:169] provisioning hostname "stopped-upgrade-165125"
	I0817 21:41:25.711119  173160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-165125
	I0817 21:41:25.731230  173160 main.go:141] libmachine: Using SSH client type: native
	I0817 21:41:25.731691  173160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 127.0.0.1 32949 <nil> <nil>}
	I0817 21:41:25.731709  173160 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-165125 && echo "stopped-upgrade-165125" | sudo tee /etc/hostname
	I0817 21:41:25.732302  173160 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33120->127.0.0.1:32949: read: connection reset by peer
	I0817 21:41:28.845892  173160 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-165125
	
	I0817 21:41:28.846001  173160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-165125
	I0817 21:41:28.862830  173160 main.go:141] libmachine: Using SSH client type: native
	I0817 21:41:28.863521  173160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 127.0.0.1 32949 <nil> <nil>}
	I0817 21:41:28.863551  173160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-165125' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-165125/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-165125' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0817 21:41:28.969799  173160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0817 21:41:28.969825  173160 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16865-10716/.minikube CaCertPath:/home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16865-10716/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16865-10716/.minikube}
	I0817 21:41:28.969848  173160 ubuntu.go:177] setting up certificates
	I0817 21:41:28.969859  173160 provision.go:83] configureAuth start
	I0817 21:41:28.969943  173160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-165125
	I0817 21:41:28.987306  173160 provision.go:138] copyHostCerts
	I0817 21:41:28.987356  173160 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-10716/.minikube/ca.pem, removing ...
	I0817 21:41:28.987365  173160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-10716/.minikube/ca.pem
	I0817 21:41:28.987425  173160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16865-10716/.minikube/ca.pem (1078 bytes)
	I0817 21:41:28.987512  173160 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-10716/.minikube/cert.pem, removing ...
	I0817 21:41:28.987520  173160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-10716/.minikube/cert.pem
	I0817 21:41:28.987545  173160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16865-10716/.minikube/cert.pem (1123 bytes)
	I0817 21:41:28.987592  173160 exec_runner.go:144] found /home/jenkins/minikube-integration/16865-10716/.minikube/key.pem, removing ...
	I0817 21:41:28.987599  173160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16865-10716/.minikube/key.pem
	I0817 21:41:28.987621  173160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16865-10716/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16865-10716/.minikube/key.pem (1679 bytes)
	I0817 21:41:28.987661  173160 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16865-10716/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-165125 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-165125]
	I0817 21:41:29.182530  173160 provision.go:172] copyRemoteCerts
	I0817 21:41:29.182591  173160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0817 21:41:29.182638  173160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-165125
	I0817 21:41:29.201306  173160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32949 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/stopped-upgrade-165125/id_rsa Username:docker}
	I0817 21:41:29.284770  173160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0817 21:41:29.301665  173160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0817 21:41:29.318596  173160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0817 21:41:29.340822  173160 provision.go:86] duration metric: configureAuth took 370.948793ms
	I0817 21:41:29.340870  173160 ubuntu.go:193] setting minikube options for container-runtime
	I0817 21:41:29.341073  173160 config.go:182] Loaded profile config "stopped-upgrade-165125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0817 21:41:29.341200  173160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-165125
	I0817 21:41:29.363725  173160 main.go:141] libmachine: Using SSH client type: native
	I0817 21:41:29.364391  173160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80f680] 0x812720 <nil>  [] 0s} 127.0.0.1 32949 <nil> <nil>}
	I0817 21:41:29.364420  173160 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0817 21:41:30.033335  173160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0817 21:41:30.033360  173160 machine.go:91] provisioned docker machine in 4.322303358s
	I0817 21:41:30.033372  173160 start.go:300] post-start starting for "stopped-upgrade-165125" (driver="docker")
	I0817 21:41:30.033383  173160 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0817 21:41:30.033449  173160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0817 21:41:30.033493  173160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-165125
	I0817 21:41:30.050609  173160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32949 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/stopped-upgrade-165125/id_rsa Username:docker}
	I0817 21:41:30.134230  173160 ssh_runner.go:195] Run: cat /etc/os-release
	I0817 21:41:30.137331  173160 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0817 21:41:30.137362  173160 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0817 21:41:30.137376  173160 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0817 21:41:30.137383  173160 info.go:137] Remote host: Ubuntu 19.10
	I0817 21:41:30.137392  173160 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-10716/.minikube/addons for local assets ...
	I0817 21:41:30.137446  173160 filesync.go:126] Scanning /home/jenkins/minikube-integration/16865-10716/.minikube/files for local assets ...
	I0817 21:41:30.137512  173160 filesync.go:149] local asset: /home/jenkins/minikube-integration/16865-10716/.minikube/files/etc/ssl/certs/175042.pem -> 175042.pem in /etc/ssl/certs
	I0817 21:41:30.137588  173160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0817 21:41:30.144762  173160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16865-10716/.minikube/files/etc/ssl/certs/175042.pem --> /etc/ssl/certs/175042.pem (1708 bytes)
	I0817 21:41:30.162422  173160 start.go:303] post-start completed in 129.035844ms
	I0817 21:41:30.162495  173160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 21:41:30.162539  173160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-165125
	I0817 21:41:30.179914  173160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32949 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/stopped-upgrade-165125/id_rsa Username:docker}
	I0817 21:41:30.258650  173160 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0817 21:41:30.262661  173160 fix.go:56] fixHost completed within 4.893147785s
	I0817 21:41:30.262688  173160 start.go:83] releasing machines lock for "stopped-upgrade-165125", held for 4.893195576s
	I0817 21:41:30.262758  173160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-165125
	I0817 21:41:30.280856  173160 ssh_runner.go:195] Run: cat /version.json
	I0817 21:41:30.280882  173160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0817 21:41:30.280924  173160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-165125
	I0817 21:41:30.280953  173160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-165125
	I0817 21:41:30.300548  173160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32949 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/stopped-upgrade-165125/id_rsa Username:docker}
	I0817 21:41:30.305457  173160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32949 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/stopped-upgrade-165125/id_rsa Username:docker}
	W0817 21:41:30.381447  173160 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0817 21:41:30.381537  173160 ssh_runner.go:195] Run: systemctl --version
	I0817 21:41:30.436619  173160 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0817 21:41:30.492396  173160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0817 21:41:30.497127  173160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 21:41:30.512660  173160 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0817 21:41:30.512736  173160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0817 21:41:30.537172  173160 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0817 21:41:30.537192  173160 start.go:466] detecting cgroup driver to use...
	I0817 21:41:30.537220  173160 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0817 21:41:30.537259  173160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0817 21:41:30.558608  173160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0817 21:41:30.567918  173160 docker.go:196] disabling cri-docker service (if available) ...
	I0817 21:41:30.567983  173160 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0817 21:41:30.577557  173160 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0817 21:41:30.588942  173160 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0817 21:41:30.599476  173160 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0817 21:41:30.599532  173160 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0817 21:41:30.672726  173160 docker.go:212] disabling docker service ...
	I0817 21:41:30.672788  173160 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0817 21:41:30.681963  173160 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0817 21:41:30.691778  173160 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0817 21:41:30.757886  173160 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0817 21:41:30.832531  173160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0817 21:41:30.842658  173160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0817 21:41:30.855504  173160 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0817 21:41:30.855557  173160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0817 21:41:30.865242  173160 out.go:177] 
	W0817 21:41:30.866831  173160 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0817 21:41:30.866849  173160 out.go:239] * 
	* 
	W0817 21:41:30.867861  173160 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0817 21:41:30.869616  173160 out.go:177] 

** /stderr **
version_upgrade_test.go:212: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-165125 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (100.58s)
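Editor's note: the root cause is at the end of the stderr block above: the HEAD binary rewrites pause_image via sed on /etc/crio/crio.conf.d/02-crio.conf, but the machine provisioned by minikube v1.9.0 (the log reports "Remote host: Ubuntu 19.10") has no such drop-in file, so sed exits with status 2 and start aborts with RUNTIME_ENABLE. A minimal defensive sketch of that step, assuming a fallback to the monolithic /etc/crio/crio.conf would be acceptable (an illustration only, not minikube's actual fix):

	# Update pause_image in whichever cri-o config file the image actually ships.
	sudo sh -c 'f=/etc/crio/crio.conf.d/02-crio.conf; [ -f "$f" ] || f=/etc/crio/crio.conf; \
	  sed -i "s|^.*pause_image = .*$|pause_image = \"registry.k8s.io/pause:3.2\"|" "$f"'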

Test pass (277/310)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 11.79
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.06
10 TestDownloadOnly/v1.27.4/json-events 5.61
11 TestDownloadOnly/v1.27.4/preload-exists 0
15 TestDownloadOnly/v1.27.4/LogsDuration 0.06
17 TestDownloadOnly/v1.28.0-rc.1/json-events 12.51
18 TestDownloadOnly/v1.28.0-rc.1/preload-exists 0
22 TestDownloadOnly/v1.28.0-rc.1/LogsDuration 0.05
23 TestDownloadOnly/DeleteAll 0.18
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.11
25 TestDownloadOnlyKic 1.16
26 TestBinaryMirror 0.69
27 TestOffline 86.2
29 TestAddons/Setup 121.82
31 TestAddons/parallel/Registry 14.28
33 TestAddons/parallel/InspektorGadget 10.8
34 TestAddons/parallel/MetricsServer 5.88
35 TestAddons/parallel/HelmTiller 11.19
37 TestAddons/parallel/CSI 47.34
38 TestAddons/parallel/Headlamp 13.26
39 TestAddons/parallel/CloudSpanner 5.79
42 TestAddons/serial/GCPAuth/Namespaces 0.11
43 TestAddons/StoppedEnableDisable 12.08
44 TestCertOptions 25.12
45 TestCertExpiration 235.86
47 TestForceSystemdFlag 26.46
48 TestForceSystemdEnv 28.18
50 TestKVMDriverInstallOrUpdate 3.4
54 TestErrorSpam/setup 23
55 TestErrorSpam/start 0.58
56 TestErrorSpam/status 0.82
57 TestErrorSpam/pause 1.45
58 TestErrorSpam/unpause 1.44
59 TestErrorSpam/stop 1.34
62 TestFunctional/serial/CopySyncFile 0
63 TestFunctional/serial/StartWithProxy 69.57
64 TestFunctional/serial/AuditLog 0
65 TestFunctional/serial/SoftStart 28.85
66 TestFunctional/serial/KubeContext 0.04
67 TestFunctional/serial/KubectlGetPods 0.07
70 TestFunctional/serial/CacheCmd/cache/add_remote 2.73
71 TestFunctional/serial/CacheCmd/cache/add_local 1.15
72 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
73 TestFunctional/serial/CacheCmd/cache/list 0.04
74 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.26
75 TestFunctional/serial/CacheCmd/cache/cache_reload 1.45
76 TestFunctional/serial/CacheCmd/cache/delete 0.09
77 TestFunctional/serial/MinikubeKubectlCmd 0.1
78 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
79 TestFunctional/serial/ExtraConfig 32.59
80 TestFunctional/serial/ComponentHealth 0.06
81 TestFunctional/serial/LogsCmd 1.3
82 TestFunctional/serial/LogsFileCmd 1.28
83 TestFunctional/serial/InvalidService 3.98
85 TestFunctional/parallel/ConfigCmd 0.33
86 TestFunctional/parallel/DashboardCmd 10.49
87 TestFunctional/parallel/DryRun 0.38
88 TestFunctional/parallel/InternationalLanguage 0.41
89 TestFunctional/parallel/StatusCmd 1
93 TestFunctional/parallel/ServiceCmdConnect 8.67
94 TestFunctional/parallel/AddonsCmd 0.12
95 TestFunctional/parallel/PersistentVolumeClaim 30.95
97 TestFunctional/parallel/SSHCmd 0.63
98 TestFunctional/parallel/CpCmd 1.3
99 TestFunctional/parallel/MySQL 21.05
100 TestFunctional/parallel/FileSync 0.26
101 TestFunctional/parallel/CertSync 1.72
105 TestFunctional/parallel/NodeLabels 0.08
107 TestFunctional/parallel/NonActiveRuntimeDisabled 0.5
109 TestFunctional/parallel/License 0.17
110 TestFunctional/parallel/ServiceCmd/DeployApp 9.28
112 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.56
113 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
115 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 12.43
116 TestFunctional/parallel/ServiceCmd/List 0.56
117 TestFunctional/parallel/ServiceCmd/JSONOutput 0.51
118 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
119 TestFunctional/parallel/ServiceCmd/Format 0.42
120 TestFunctional/parallel/ServiceCmd/URL 0.35
121 TestFunctional/parallel/ProfileCmd/profile_not_create 0.36
122 TestFunctional/parallel/Version/short 0.05
123 TestFunctional/parallel/Version/components 0.49
124 TestFunctional/parallel/MountCmd/any-port 7.21
125 TestFunctional/parallel/ProfileCmd/profile_list 0.32
126 TestFunctional/parallel/ProfileCmd/profile_json_output 0.3
127 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
128 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
129 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
130 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
131 TestFunctional/parallel/ImageCommands/ImageBuild 1.72
132 TestFunctional/parallel/ImageCommands/Setup 0.89
133 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
134 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
138 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
139 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.28
140 TestFunctional/parallel/MountCmd/specific-port 1.79
141 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.2
142 TestFunctional/parallel/MountCmd/VerifyCleanup 1.15
143 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.21
144 TestFunctional/parallel/UpdateContextCmd/no_changes 0.13
145 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.13
146 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.12
147 TestFunctional/parallel/ImageCommands/ImageSaveToFile 4.2
149 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.53
150 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.87
151 TestFunctional/delete_addon-resizer_images 0.07
152 TestFunctional/delete_my-image_image 0.02
153 TestFunctional/delete_minikube_cached_images 0.01
157 TestIngressAddonLegacy/StartLegacyK8sCluster 80.67
159 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 11.29
160 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.53
164 TestJSONOutput/start/Command 68.75
165 TestJSONOutput/start/Audit 0
167 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
168 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
170 TestJSONOutput/pause/Command 0.63
171 TestJSONOutput/pause/Audit 0
173 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
174 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
176 TestJSONOutput/unpause/Command 0.59
177 TestJSONOutput/unpause/Audit 0
179 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
180 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
182 TestJSONOutput/stop/Command 5.68
183 TestJSONOutput/stop/Audit 0
185 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
187 TestErrorJSONOutput 0.19
189 TestKicCustomNetwork/create_custom_network 34.62
190 TestKicCustomNetwork/use_default_bridge_network 23.93
191 TestKicExistingNetwork 25.34
192 TestKicCustomSubnet 26.92
193 TestKicStaticIP 27.07
194 TestMainNoArgs 0.04
195 TestMinikubeProfile 52.17
198 TestMountStart/serial/StartWithMountFirst 5.55
199 TestMountStart/serial/VerifyMountFirst 0.23
200 TestMountStart/serial/StartWithMountSecond 8.09
201 TestMountStart/serial/VerifyMountSecond 0.23
202 TestMountStart/serial/DeleteFirst 1.61
203 TestMountStart/serial/VerifyMountPostDelete 0.23
204 TestMountStart/serial/Stop 1.17
205 TestMountStart/serial/RestartStopped 7.01
206 TestMountStart/serial/VerifyMountPostStop 0.23
209 TestMultiNode/serial/FreshStart2Nodes 55.37
210 TestMultiNode/serial/DeployApp2Nodes 3.38
212 TestMultiNode/serial/AddNode 18.27
213 TestMultiNode/serial/ProfileList 0.26
214 TestMultiNode/serial/CopyFile 8.54
215 TestMultiNode/serial/StopNode 2.07
216 TestMultiNode/serial/StartAfterStop 11.12
217 TestMultiNode/serial/RestartKeepsNodes 116.02
218 TestMultiNode/serial/DeleteNode 4.58
219 TestMultiNode/serial/StopMultiNode 23.75
220 TestMultiNode/serial/RestartMultiNode 72.04
221 TestMultiNode/serial/ValidateNameConflict 23.2
226 TestPreload 157.77
228 TestScheduledStopUnix 100.04
231 TestInsufficientStorage 10.07
234 TestKubernetesUpgrade 361.69
235 TestMissingContainerUpgrade 154.12
236 TestStoppedBinaryUpgrade/Setup 0.71
238 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
239 TestNoKubernetes/serial/StartWithK8s 33.84
241 TestNoKubernetes/serial/StartWithStopK8s 9.04
242 TestNoKubernetes/serial/Start 9.7
243 TestNoKubernetes/serial/VerifyK8sNotRunning 0.3
244 TestNoKubernetes/serial/ProfileList 1.37
245 TestNoKubernetes/serial/Stop 1.75
246 TestNoKubernetes/serial/StartNoArgs 8.09
247 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
256 TestPause/serial/Start 73.36
257 TestStoppedBinaryUpgrade/MinikubeLogs 0.56
265 TestNetworkPlugins/group/false 3.71
266 TestPause/serial/SecondStartNoReconfiguration 27.89
270 TestPause/serial/Pause 0.87
271 TestPause/serial/VerifyStatus 0.3
272 TestPause/serial/Unpause 0.68
273 TestPause/serial/PauseAgain 0.78
274 TestPause/serial/DeletePaused 2.66
275 TestPause/serial/VerifyDeletedResources 14.93
277 TestStartStop/group/old-k8s-version/serial/FirstStart 128.71
279 TestStartStop/group/no-preload/serial/FirstStart 73.48
280 TestStartStop/group/no-preload/serial/DeployApp 8.35
281 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.86
282 TestStartStop/group/no-preload/serial/Stop 11.89
283 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.15
284 TestStartStop/group/no-preload/serial/SecondStart 335.72
285 TestStartStop/group/old-k8s-version/serial/DeployApp 8.46
286 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.77
287 TestStartStop/group/old-k8s-version/serial/Stop 11.92
288 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.15
289 TestStartStop/group/old-k8s-version/serial/SecondStart 418.78
291 TestStartStop/group/embed-certs/serial/FirstStart 68.84
293 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 37.22
294 TestStartStop/group/embed-certs/serial/DeployApp 8.72
295 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.08
296 TestStartStop/group/embed-certs/serial/Stop 11.96
297 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.16
298 TestStartStop/group/embed-certs/serial/SecondStart 336.04
299 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 7.39
300 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.96
301 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.95
302 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
303 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 347.56
304 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 11.02
305 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
306 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
307 TestStartStop/group/no-preload/serial/Pause 2.65
309 TestStartStop/group/newest-cni/serial/FirstStart 38.2
310 TestStartStop/group/newest-cni/serial/DeployApp 0
311 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.86
312 TestStartStop/group/newest-cni/serial/Stop 1.22
313 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
314 TestStartStop/group/newest-cni/serial/SecondStart 26.31
315 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
316 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
317 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
318 TestStartStop/group/newest-cni/serial/Pause 2.48
319 TestNetworkPlugins/group/auto/Start 69.07
320 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
321 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
322 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.29
323 TestStartStop/group/old-k8s-version/serial/Pause 2.58
324 TestNetworkPlugins/group/kindnet/Start 72.07
325 TestNetworkPlugins/group/auto/KubeletFlags 0.3
326 TestNetworkPlugins/group/auto/NetCatPod 9.32
327 TestNetworkPlugins/group/auto/DNS 0.17
328 TestNetworkPlugins/group/auto/Localhost 0.14
329 TestNetworkPlugins/group/auto/HairPin 0.13
330 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 14.02
331 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
332 TestNetworkPlugins/group/calico/Start 63.15
333 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.4
334 TestStartStop/group/embed-certs/serial/Pause 2.92
335 TestNetworkPlugins/group/custom-flannel/Start 64.03
336 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
337 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
338 TestNetworkPlugins/group/kindnet/NetCatPod 11.38
339 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.02
340 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
341 TestNetworkPlugins/group/kindnet/DNS 0.19
342 TestNetworkPlugins/group/kindnet/Localhost 0.17
343 TestNetworkPlugins/group/kindnet/HairPin 0.18
344 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.35
345 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.81
346 TestNetworkPlugins/group/enable-default-cni/Start 80.16
347 TestNetworkPlugins/group/flannel/Start 56.67
348 TestNetworkPlugins/group/calico/ControllerPod 5.02
349 TestNetworkPlugins/group/calico/KubeletFlags 0.29
350 TestNetworkPlugins/group/calico/NetCatPod 10.41
351 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.29
352 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.32
353 TestNetworkPlugins/group/calico/DNS 0.17
354 TestNetworkPlugins/group/calico/Localhost 0.17
355 TestNetworkPlugins/group/calico/HairPin 0.14
356 TestNetworkPlugins/group/custom-flannel/DNS 0.19
357 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
358 TestNetworkPlugins/group/custom-flannel/HairPin 0.23
359 TestNetworkPlugins/group/bridge/Start 34.55
360 TestNetworkPlugins/group/flannel/ControllerPod 5.02
361 TestNetworkPlugins/group/flannel/KubeletFlags 0.26
362 TestNetworkPlugins/group/flannel/NetCatPod 10.28
363 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.25
364 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.29
365 TestNetworkPlugins/group/flannel/DNS 0.21
366 TestNetworkPlugins/group/flannel/Localhost 0.16
367 TestNetworkPlugins/group/flannel/HairPin 0.17
368 TestNetworkPlugins/group/bridge/KubeletFlags 0.26
369 TestNetworkPlugins/group/bridge/NetCatPod 9.32
370 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
371 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
372 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
373 TestNetworkPlugins/group/bridge/DNS 0.17
374 TestNetworkPlugins/group/bridge/Localhost 0.15
375 TestNetworkPlugins/group/bridge/HairPin 0.15
TestDownloadOnly/v1.16.0/json-events (11.79s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-538116 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-538116 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (11.785398722s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (11.79s)
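
For anyone replaying this step outside the test harness: the json-events subtest drives `minikube start -o=json` and consumes the line-delimited JSON events it prints to stdout. Below is a minimal Go sketch of that consumption pattern. It mirrors the command line logged above, but the event schema (one JSON object per line with a `type` field) is an assumption here, not something taken from the test source.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Mirrors the invocation in the log above; binary path and profile name are from this run.
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-o=json",
		"--download-only", "-p", "download-only-538116", "--force",
		"--kubernetes-version=v1.16.0", "--container-runtime=crio", "--driver=docker")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	sc := bufio.NewScanner(stdout)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // some event lines are long
	for sc.Scan() {
		var ev map[string]any // "type" is an assumed field name
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			log.Printf("non-JSON line: %q", sc.Text())
			continue
		}
		fmt.Println("event:", ev["type"])
	}
	if err := cmd.Wait(); err != nil {
		log.Fatal(err)
	}
}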

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)
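
The preload-exists subtest only needs to confirm that the previous step left the preload tarball in the local cache. Here is a short sketch of such a check, assuming the cache layout visible in the LogsDuration output below (MINIKUBE_HOME/cache/preloaded-tarball/<tarball name>); the preloadPath helper is hypothetical, not minikube's API.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath is a hypothetical helper; the layout matches the cache paths
// printed in the logs of this run.
func preloadPath(minikubeHome, k8sVersion string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-amd64.tar.lz4", k8sVersion)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.16.0")
	if _, err := os.Stat(p); err != nil {
		fmt.Println("preload missing:", err)
		os.Exit(1)
	}
	fmt.Println("preload exists:", p)
}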

TestDownloadOnly/v1.16.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-538116
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-538116: exit status 85 (55.642108ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-538116 | jenkins | v1.31.2 | 17 Aug 23 21:10 UTC |          |
	|         | -p download-only-538116        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/17 21:10:19
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 21:10:19.247319   17516 out.go:296] Setting OutFile to fd 1 ...
	I0817 21:10:19.247519   17516 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:10:19.247529   17516 out.go:309] Setting ErrFile to fd 2...
	I0817 21:10:19.247533   17516 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:10:19.247734   17516 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-10716/.minikube/bin
	W0817 21:10:19.247840   17516 root.go:314] Error reading config file at /home/jenkins/minikube-integration/16865-10716/.minikube/config/config.json: open /home/jenkins/minikube-integration/16865-10716/.minikube/config/config.json: no such file or directory
	I0817 21:10:19.248367   17516 out.go:303] Setting JSON to true
	I0817 21:10:19.249248   17516 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3167,"bootTime":1692303452,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0817 21:10:19.249301   17516 start.go:138] virtualization: kvm guest
	I0817 21:10:19.251617   17516 out.go:97] [download-only-538116] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0817 21:10:19.253142   17516 out.go:169] MINIKUBE_LOCATION=16865
	W0817 21:10:19.251709   17516 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/16865-10716/.minikube/cache/preloaded-tarball: no such file or directory
	I0817 21:10:19.251741   17516 notify.go:220] Checking for updates...
	I0817 21:10:19.255656   17516 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0817 21:10:19.257042   17516 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16865-10716/kubeconfig
	I0817 21:10:19.258324   17516 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-10716/.minikube
	I0817 21:10:19.259674   17516 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0817 21:10:19.261860   17516 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0817 21:10:19.262080   17516 driver.go:373] Setting default libvirt URI to qemu:///system
	I0817 21:10:19.281969   17516 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0817 21:10:19.282030   17516 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0817 21:10:19.611456   17516 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:43 SystemTime:2023-08-17 21:10:19.603610139 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0817 21:10:19.611552   17516 docker.go:294] overlay module found
	I0817 21:10:19.613396   17516 out.go:97] Using the docker driver based on user configuration
	I0817 21:10:19.613412   17516 start.go:298] selected driver: docker
	I0817 21:10:19.613416   17516 start.go:902] validating driver "docker" against <nil>
	I0817 21:10:19.613488   17516 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0817 21:10:19.664790   17516 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:43 SystemTime:2023-08-17 21:10:19.656738044 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0817 21:10:19.664938   17516 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0817 21:10:19.665388   17516 start_flags.go:382] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0817 21:10:19.665532   17516 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0817 21:10:19.667540   17516 out.go:169] Using Docker driver with root privileges
	I0817 21:10:19.668825   17516 cni.go:84] Creating CNI manager for ""
	I0817 21:10:19.668836   17516 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0817 21:10:19.668844   17516 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0817 21:10:19.668865   17516 start_flags.go:319] config:
	{Name:download-only-538116 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-538116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:10:19.670195   17516 out.go:97] Starting control plane node download-only-538116 in cluster download-only-538116
	I0817 21:10:19.670208   17516 cache.go:122] Beginning downloading kic base image for docker with crio
	I0817 21:10:19.671390   17516 out.go:97] Pulling base image ...
	I0817 21:10:19.671411   17516 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0817 21:10:19.671461   17516 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0817 21:10:19.686334   17516 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0817 21:10:19.686485   17516 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory
	I0817 21:10:19.686561   17516 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0817 21:10:19.702791   17516 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0817 21:10:19.702815   17516 cache.go:57] Caching tarball of preloaded images
	I0817 21:10:19.702914   17516 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0817 21:10:19.704743   17516 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0817 21:10:19.704753   17516 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0817 21:10:19.740361   17516 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/16865-10716/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0817 21:10:22.461625   17516 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 as a tarball
	I0817 21:10:24.552479   17516 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0817 21:10:24.552582   17516 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16865-10716/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0817 21:10:25.413976   17516 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0817 21:10:25.414276   17516 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/download-only-538116/config.json ...
	I0817 21:10:25.414303   17516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/download-only-538116/config.json: {Name:mk31a6fc218d7392cd67ab466178c1018df4f921 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0817 21:10:25.414463   17516 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0817 21:10:25.414618   17516 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/16865-10716/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-538116"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.06s)
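
Note that the preload download in the log above is fetched with a ?checksum=md5:432b600409d778ea7a21214e83948570 query, and the run then saves and verifies that checksum before declaring the cache valid. A minimal sketch of the verification step, assuming the digest is simply the hex-encoded MD5 of the whole tarball (an illustration, not minikube's internal code):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"os"
)

func main() {
	// Digest taken from the ?checksum=md5:... query string in the log above.
	const want = "432b600409d778ea7a21214e83948570"
	f, err := os.Open(os.Args[1]) // path to the downloaded .tar.lz4
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		log.Fatal(err)
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		log.Fatalf("checksum mismatch: got %s want %s", got, want)
	}
	fmt.Println("checksum OK")
}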

TestDownloadOnly/v1.27.4/json-events (5.61s)

=== RUN   TestDownloadOnly/v1.27.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-538116 --force --alsologtostderr --kubernetes-version=v1.27.4 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-538116 --force --alsologtostderr --kubernetes-version=v1.27.4 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.609117207s)
--- PASS: TestDownloadOnly/v1.27.4/json-events (5.61s)

TestDownloadOnly/v1.27.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.27.4/preload-exists
--- PASS: TestDownloadOnly/v1.27.4/preload-exists (0.00s)

TestDownloadOnly/v1.27.4/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.27.4/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-538116
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-538116: exit status 85 (54.295364ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-538116 | jenkins | v1.31.2 | 17 Aug 23 21:10 UTC |          |
	|         | -p download-only-538116        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-538116 | jenkins | v1.31.2 | 17 Aug 23 21:10 UTC |          |
	|         | -p download-only-538116        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.4   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/17 21:10:31
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 21:10:31.092410   17673 out.go:296] Setting OutFile to fd 1 ...
	I0817 21:10:31.092522   17673 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:10:31.092532   17673 out.go:309] Setting ErrFile to fd 2...
	I0817 21:10:31.092538   17673 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:10:31.092749   17673 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-10716/.minikube/bin
	W0817 21:10:31.092877   17673 root.go:314] Error reading config file at /home/jenkins/minikube-integration/16865-10716/.minikube/config/config.json: open /home/jenkins/minikube-integration/16865-10716/.minikube/config/config.json: no such file or directory
	I0817 21:10:31.093283   17673 out.go:303] Setting JSON to true
	I0817 21:10:31.094134   17673 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3179,"bootTime":1692303452,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0817 21:10:31.094197   17673 start.go:138] virtualization: kvm guest
	I0817 21:10:31.096468   17673 out.go:97] [download-only-538116] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0817 21:10:31.098124   17673 out.go:169] MINIKUBE_LOCATION=16865
	I0817 21:10:31.096600   17673 notify.go:220] Checking for updates...
	I0817 21:10:31.101354   17673 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0817 21:10:31.103067   17673 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16865-10716/kubeconfig
	I0817 21:10:31.104517   17673 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-10716/.minikube
	I0817 21:10:31.105850   17673 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0817 21:10:31.108748   17673 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0817 21:10:31.109170   17673 config.go:182] Loaded profile config "download-only-538116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W0817 21:10:31.109225   17673 start.go:810] api.Load failed for download-only-538116: filestore "download-only-538116": Docker machine "download-only-538116" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0817 21:10:31.109315   17673 driver.go:373] Setting default libvirt URI to qemu:///system
	W0817 21:10:31.109359   17673 start.go:810] api.Load failed for download-only-538116: filestore "download-only-538116": Docker machine "download-only-538116" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0817 21:10:31.129269   17673 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0817 21:10:31.129335   17673 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0817 21:10:31.180237   17673 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-08-17 21:10:31.171905991 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0817 21:10:31.180335   17673 docker.go:294] overlay module found
	I0817 21:10:31.182231   17673 out.go:97] Using the docker driver based on existing profile
	I0817 21:10:31.182249   17673 start.go:298] selected driver: docker
	I0817 21:10:31.182254   17673 start.go:902] validating driver "docker" against &{Name:download-only-538116 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-538116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:10:31.182392   17673 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0817 21:10:31.230109   17673 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-08-17 21:10:31.22262895 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0817 21:10:31.230748   17673 cni.go:84] Creating CNI manager for ""
	I0817 21:10:31.230767   17673 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0817 21:10:31.230776   17673 start_flags.go:319] config:
	{Name:download-only-538116 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:download-only-538116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:10:31.232650   17673 out.go:97] Starting control plane node download-only-538116 in cluster download-only-538116
	I0817 21:10:31.232665   17673 cache.go:122] Beginning downloading kic base image for docker with crio
	I0817 21:10:31.234033   17673 out.go:97] Pulling base image ...
	I0817 21:10:31.234056   17673 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0817 21:10:31.234157   17673 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0817 21:10:31.250332   17673 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0817 21:10:31.250467   17673 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory
	I0817 21:10:31.250487   17673 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory, skipping pull
	I0817 21:10:31.250493   17673 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in cache, skipping pull
	I0817 21:10:31.250501   17673 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 as a tarball
	I0817 21:10:31.264454   17673 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.4/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4
	I0817 21:10:31.264494   17673 cache.go:57] Caching tarball of preloaded images
	I0817 21:10:31.264615   17673 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0817 21:10:31.266459   17673 out.go:97] Downloading Kubernetes v1.27.4 preload ...
	I0817 21:10:31.266477   17673 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4 ...
	I0817 21:10:31.302201   17673 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.4/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:8fb3cf29e31ee2994fdad70ff1ffc061 -> /home/jenkins/minikube-integration/16865-10716/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4
	I0817 21:10:34.746214   17673 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4 ...
	I0817 21:10:34.746308   17673 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16865-10716/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4 ...
	I0817 21:10:35.597722   17673 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on crio
	I0817 21:10:35.597853   17673 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/download-only-538116/config.json ...
	I0817 21:10:35.598059   17673 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0817 21:10:35.598233   17673 download.go:107] Downloading: https://dl.k8s.io/release/v1.27.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.27.4/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/16865-10716/.minikube/cache/linux/amd64/v1.27.4/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-538116"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.4/LogsDuration (0.06s)

TestDownloadOnly/v1.28.0-rc.1/json-events (12.51s)

=== RUN   TestDownloadOnly/v1.28.0-rc.1/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-538116 --force --alsologtostderr --kubernetes-version=v1.28.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-538116 --force --alsologtostderr --kubernetes-version=v1.28.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (12.510661284s)
--- PASS: TestDownloadOnly/v1.28.0-rc.1/json-events (12.51s)

TestDownloadOnly/v1.28.0-rc.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0-rc.1/preload-exists
--- PASS: TestDownloadOnly/v1.28.0-rc.1/preload-exists (0.00s)

TestDownloadOnly/v1.28.0-rc.1/LogsDuration (0.05s)

=== RUN   TestDownloadOnly/v1.28.0-rc.1/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-538116
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-538116: exit status 85 (54.079452ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-538116 | jenkins | v1.31.2 | 17 Aug 23 21:10 UTC |          |
	|         | -p download-only-538116           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-538116 | jenkins | v1.31.2 | 17 Aug 23 21:10 UTC |          |
	|         | -p download-only-538116           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.4      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-538116 | jenkins | v1.31.2 | 17 Aug 23 21:10 UTC |          |
	|         | -p download-only-538116           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.0-rc.1 |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/17 21:10:36
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0817 21:10:36.757345   17817 out.go:296] Setting OutFile to fd 1 ...
	I0817 21:10:36.757443   17817 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:10:36.757453   17817 out.go:309] Setting ErrFile to fd 2...
	I0817 21:10:36.757459   17817 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:10:36.757621   17817 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-10716/.minikube/bin
	W0817 21:10:36.757717   17817 root.go:314] Error reading config file at /home/jenkins/minikube-integration/16865-10716/.minikube/config/config.json: open /home/jenkins/minikube-integration/16865-10716/.minikube/config/config.json: no such file or directory
	I0817 21:10:36.758131   17817 out.go:303] Setting JSON to true
	I0817 21:10:36.758827   17817 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3185,"bootTime":1692303452,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0817 21:10:36.758878   17817 start.go:138] virtualization: kvm guest
	I0817 21:10:36.760880   17817 out.go:97] [download-only-538116] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0817 21:10:36.762297   17817 out.go:169] MINIKUBE_LOCATION=16865
	I0817 21:10:36.761054   17817 notify.go:220] Checking for updates...
	I0817 21:10:36.764854   17817 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0817 21:10:36.766016   17817 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16865-10716/kubeconfig
	I0817 21:10:36.767130   17817 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-10716/.minikube
	I0817 21:10:36.768471   17817 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0817 21:10:36.770994   17817 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0817 21:10:36.771591   17817 config.go:182] Loaded profile config "download-only-538116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	W0817 21:10:36.771634   17817 start.go:810] api.Load failed for download-only-538116: filestore "download-only-538116": Docker machine "download-only-538116" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0817 21:10:36.771739   17817 driver.go:373] Setting default libvirt URI to qemu:///system
	W0817 21:10:36.771784   17817 start.go:810] api.Load failed for download-only-538116: filestore "download-only-538116": Docker machine "download-only-538116" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0817 21:10:36.792726   17817 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0817 21:10:36.792832   17817 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0817 21:10:36.841183   17817 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-08-17 21:10:36.833506447 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0817 21:10:36.841269   17817 docker.go:294] overlay module found
	I0817 21:10:36.842986   17817 out.go:97] Using the docker driver based on existing profile
	I0817 21:10:36.843012   17817 start.go:298] selected driver: docker
	I0817 21:10:36.843025   17817 start.go:902] validating driver "docker" against &{Name:download-only-538116 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:download-only-538116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:10:36.843161   17817 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0817 21:10:36.891663   17817 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-08-17 21:10:36.883939499 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0817 21:10:36.892825   17817 cni.go:84] Creating CNI manager for ""
	I0817 21:10:36.893058   17817 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0817 21:10:36.893076   17817 start_flags.go:319] config:
	{Name:download-only-538116 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0-rc.1 ClusterName:download-only-538116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:10:36.894933   17817 out.go:97] Starting control plane node download-only-538116 in cluster download-only-538116
	I0817 21:10:36.894951   17817 cache.go:122] Beginning downloading kic base image for docker with crio
	I0817 21:10:36.896092   17817 out.go:97] Pulling base image ...
	I0817 21:10:36.896125   17817 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.1 and runtime crio
	I0817 21:10:36.896213   17817 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0817 21:10:36.910351   17817 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0817 21:10:36.910441   17817 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory
	I0817 21:10:36.910459   17817 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory, skipping pull
	I0817 21:10:36.910467   17817 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in cache, skipping pull
	I0817 21:10:36.910481   17817 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 as a tarball
	I0817 21:10:36.927464   17817 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0-rc.1/preloaded-images-k8s-v18-v1.28.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I0817 21:10:36.927480   17817 cache.go:57] Caching tarball of preloaded images
	I0817 21:10:36.927605   17817 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.1 and runtime crio
	I0817 21:10:36.929727   17817 out.go:97] Downloading Kubernetes v1.28.0-rc.1 preload ...
	I0817 21:10:36.929751   17817 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.0-rc.1-cri-o-overlay-amd64.tar.lz4 ...
	I0817 21:10:36.959241   17817 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0-rc.1/preloaded-images-k8s-v18-v1.28.0-rc.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:bb8ba69c7dfa450cc0765c8991e48fa2 -> /home/jenkins/minikube-integration/16865-10716/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I0817 21:10:41.426907   17817 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.0-rc.1-cri-o-overlay-amd64.tar.lz4 ...
	I0817 21:10:41.427022   17817 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16865-10716/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-rc.1-cri-o-overlay-amd64.tar.lz4 ...
	I0817 21:10:42.316403   17817 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0-rc.1 on crio
	I0817 21:10:42.316563   17817 profile.go:148] Saving config to /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/download-only-538116/config.json ...
	I0817 21:10:42.316832   17817 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.1 and runtime crio
	I0817 21:10:42.317091   17817 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.0-rc.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0-rc.1/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/16865-10716/.minikube/cache/linux/amd64/v1.28.0-rc.1/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-538116"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0-rc.1/LogsDuration (0.05s)
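The download steps above pin an md5 digest in the URL query ("?checksum=md5:...") and verify the fetched preload tarball against it before trusting it. As a rough illustration of that verification step (a sketch only, not minikube's actual download code, which delegates to a download library), the digest comparison in Go looks like this; the local file path is hypothetical:

// checksum_sketch.go: verify a downloaded preload tarball against a pinned
// md5 digest, as in the download logged above. Illustration only; the file
// name and digest are copied from the log, the code is not minikube's.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

func verifyMD5(path, want string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	// Stream the file through the hash so large tarballs stay cheap on memory.
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	err := verifyMD5("preloaded-images-k8s-v18-v1.28.0-rc.1-cri-o-overlay-amd64.tar.lz4",
		"bb8ba69c7dfa450cc0765c8991e48fa2")
	fmt.Println("verify:", err)
}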
TestDownloadOnly/DeleteAll (0.18s)
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.18s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.11s)
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-538116
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.11s)

TestDownloadOnlyKic (1.16s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-218206 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-218206" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-218206
--- PASS: TestDownloadOnlyKic (1.16s)

TestBinaryMirror (0.69s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-960837 --alsologtostderr --binary-mirror http://127.0.0.1:40255 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-960837" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-960837
--- PASS: TestBinaryMirror (0.69s)
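For context: TestBinaryMirror passes --binary-mirror http://127.0.0.1:40255, so minikube fetches its Kubernetes binaries from a local HTTP server instead of dl.k8s.io. A mirror of that shape can be as small as a static file server; the sketch below is a hypothetical stand-in (the ./mirror directory layout is an assumption, and this is not the test's actual helper):

// mirror_sketch.go: a minimal local binary mirror. Assumes ./mirror holds
// the release tree minikube requests (assumed layout), e.g.
// ./mirror/release/v1.27.4/bin/linux/amd64/kubectl.
package main

import (
	"log"
	"net/http"
)

func main() {
	// http.FileServer maps request paths directly onto the directory tree,
	// which is all a read-only binary mirror needs.
	log.Fatal(http.ListenAndServe("127.0.0.1:40255", http.FileServer(http.Dir("./mirror"))))
}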
TestOffline (86.2s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-141315 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-141315 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (1m22.986686242s)
helpers_test.go:175: Cleaning up "offline-crio-141315" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-141315
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-141315: (3.21536744s)
--- PASS: TestOffline (86.20s)

TestAddons/Setup (121.82s)
=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-418182 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-418182 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m1.823484855s)
--- PASS: TestAddons/Setup (121.82s)

TestAddons/parallel/Registry (14.28s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 16.101979ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-jvrmk" [9aab1115-3b3c-44fc-a53c-2c86008dc60c] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.01570974s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-xx5bc" [ff6fee0b-74e6-4311-a81d-8123bc66f740] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.017415255s
addons_test.go:316: (dbg) Run:  kubectl --context addons-418182 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-418182 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-418182 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.452790145s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-amd64 -p addons-418182 ip
addons_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p addons-418182 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.28s)
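Each "waiting ... for pods matching" line above is the shared helper polling the cluster until every pod behind a label selector is healthy. A rough client-go equivalent of that loop, as a sketch under assumptions (default kubeconfig, the registry addon's namespace and selector; this is not the actual helpers_test.go code):

// waitpods_sketch.go: poll until all pods matching a selector are Running.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll every 2s, give up after 6m0s, like the timeout logged above.
	err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "actual-registry=true"})
		if err != nil || len(pods.Items) == 0 {
			return false, nil // transient: keep polling
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				return false, nil
			}
		}
		return true, nil
	})
	fmt.Println("wait result:", err)
}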
TestAddons/parallel/InspektorGadget (10.8s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-r28tv" [5f279417-1c11-482d-8096-550dd7184128] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.080528177s
addons_test.go:817: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-418182
addons_test.go:817: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-418182: (5.715776962s)
--- PASS: TestAddons/parallel/InspektorGadget (10.80s)

TestAddons/parallel/MetricsServer (5.88s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 16.483601ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7746886d4f-hxxz9" [17f8de9b-b07e-447b-86c7-b4dbe0e78707] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.01512161s
addons_test.go:391: (dbg) Run:  kubectl --context addons-418182 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p addons-418182 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.88s)

TestAddons/parallel/HelmTiller (11.19s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:432: tiller-deploy stabilized in 15.401785ms
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6847666dc-jqhh9" [973f9c57-236d-465a-af89-11b2247a28eb] Running
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.01482071s
addons_test.go:449: (dbg) Run:  kubectl --context addons-418182 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:449: (dbg) Done: kubectl --context addons-418182 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.506906073s)
addons_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p addons-418182 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.19s)

TestAddons/parallel/CSI (47.34s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 5.208228ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-418182 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:540: (dbg) Done: kubectl --context addons-418182 create -f testdata/csi-hostpath-driver/pvc.yaml: (1.012645679s)
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-418182 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-418182 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-418182 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-418182 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-418182 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-418182 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-418182 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [91c402fc-4723-4dc9-be33-3ad884307bb6] Pending
helpers_test.go:344: "task-pv-pod" [91c402fc-4723-4dc9-be33-3ad884307bb6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [91c402fc-4723-4dc9-be33-3ad884307bb6] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.009587781s
addons_test.go:560: (dbg) Run:  kubectl --context addons-418182 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-418182 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-418182 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-418182 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-418182 delete pod task-pv-pod
addons_test.go:570: (dbg) Done: kubectl --context addons-418182 delete pod task-pv-pod: (1.261567322s)
addons_test.go:576: (dbg) Run:  kubectl --context addons-418182 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-418182 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-418182 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-418182 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-418182 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-418182 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-418182 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-418182 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-418182 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-418182 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-418182 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-418182 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-418182 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-418182 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-418182 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-418182 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [7f7a2fc7-8c26-42e6-97ec-556aa2ecab04] Pending
helpers_test.go:344: "task-pv-pod-restore" [7f7a2fc7-8c26-42e6-97ec-556aa2ecab04] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [7f7a2fc7-8c26-42e6-97ec-556aa2ecab04] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.007591312s
addons_test.go:602: (dbg) Run:  kubectl --context addons-418182 delete pod task-pv-pod-restore
addons_test.go:606: (dbg) Run:  kubectl --context addons-418182 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-418182 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-amd64 -p addons-418182 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-amd64 -p addons-418182 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.550318921s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-amd64 -p addons-418182 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (47.34s)

TestAddons/parallel/Headlamp (13.26s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-418182 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-418182 --alsologtostderr -v=1: (1.232533627s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5c78f74d8d-lng4q" [6d7a982c-4801-460a-ac60-985bc48b0d71] Pending
helpers_test.go:344: "headlamp-5c78f74d8d-lng4q" [6d7a982c-4801-460a-ac60-985bc48b0d71] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5c78f74d8d-lng4q" [6d7a982c-4801-460a-ac60-985bc48b0d71] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.022698782s
--- PASS: TestAddons/parallel/Headlamp (13.26s)

TestAddons/parallel/CloudSpanner (5.79s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-d67854dc9-wfxxm" [e1eef886-4105-43f3-8619-af132293d8dc] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.009848895s
addons_test.go:836: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-418182
--- PASS: TestAddons/parallel/CloudSpanner (5.79s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-418182 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-418182 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/StoppedEnableDisable (12.08s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-418182
addons_test.go:148: (dbg) Done: out/minikube-linux-amd64 stop -p addons-418182: (11.862728508s)
addons_test.go:152: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-418182
addons_test.go:156: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-418182
addons_test.go:161: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-418182
--- PASS: TestAddons/StoppedEnableDisable (12.08s)

TestCertOptions (25.12s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-274540 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-274540 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (22.561789919s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-274540 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-274540 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-274540 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-274540" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-274540
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-274540: (1.980538467s)
--- PASS: TestCertOptions (25.12s)
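The openssl step above is what actually validates the flags: it decodes /var/lib/minikube/certs/apiserver.crt and checks that the extra --apiserver-ips and --apiserver-names values were baked in as SANs. The same inspection in Go, as a sketch against a local copy of the certificate (the file name is hypothetical):

// sancheck_sketch.go: print the SANs of an API server certificate.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Hypothetical local copy of /var/lib/minikube/certs/apiserver.crt.
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// With the flags above, 192.168.15.15 and www.google.com should be listed.
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs: ", cert.IPAddresses)
}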
TestCertExpiration (235.86s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-503195 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-503195 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (22.845041916s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-503195 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-503195 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (31.078139967s)
helpers_test.go:175: Cleaning up "cert-expiration-503195" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-503195
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-503195: (1.935015825s)
--- PASS: TestCertExpiration (235.86s)

TestForceSystemdFlag (26.46s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-089883 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0817 21:42:53.379679   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/client.crt: no such file or directory
E0817 21:42:54.826738   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-089883 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (23.972198565s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-089883 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-089883" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-089883
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-089883: (2.246783079s)
--- PASS: TestForceSystemdFlag (26.46s)

TestForceSystemdEnv (28.18s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-592923 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-592923 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (25.788446347s)
helpers_test.go:175: Cleaning up "force-systemd-env-592923" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-592923
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-592923: (2.392212322s)
--- PASS: TestForceSystemdEnv (28.18s)

TestKVMDriverInstallOrUpdate (3.4s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.40s)

TestErrorSpam/setup (23s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-082738 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-082738 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-082738 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-082738 --driver=docker  --container-runtime=crio: (22.995201571s)
--- PASS: TestErrorSpam/setup (23.00s)

TestErrorSpam/start (0.58s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-082738 --log_dir /tmp/nospam-082738 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-082738 --log_dir /tmp/nospam-082738 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-082738 --log_dir /tmp/nospam-082738 start --dry-run
--- PASS: TestErrorSpam/start (0.58s)

TestErrorSpam/status (0.82s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-082738 --log_dir /tmp/nospam-082738 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-082738 --log_dir /tmp/nospam-082738 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-082738 --log_dir /tmp/nospam-082738 status
--- PASS: TestErrorSpam/status (0.82s)

TestErrorSpam/pause (1.45s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-082738 --log_dir /tmp/nospam-082738 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-082738 --log_dir /tmp/nospam-082738 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-082738 --log_dir /tmp/nospam-082738 pause
--- PASS: TestErrorSpam/pause (1.45s)

TestErrorSpam/unpause (1.44s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-082738 --log_dir /tmp/nospam-082738 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-082738 --log_dir /tmp/nospam-082738 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-082738 --log_dir /tmp/nospam-082738 unpause
--- PASS: TestErrorSpam/unpause (1.44s)

TestErrorSpam/stop (1.34s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-082738 --log_dir /tmp/nospam-082738 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-082738 --log_dir /tmp/nospam-082738 stop: (1.179718473s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-082738 --log_dir /tmp/nospam-082738 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-082738 --log_dir /tmp/nospam-082738 stop
--- PASS: TestErrorSpam/stop (1.34s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/16865-10716/.minikube/files/etc/test/nested/copy/17504/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (69.57s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-702251 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0817 21:17:53.380461   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/client.crt: no such file or directory
E0817 21:17:53.386173   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/client.crt: no such file or directory
E0817 21:17:53.396408   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/client.crt: no such file or directory
E0817 21:17:53.416658   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/client.crt: no such file or directory
E0817 21:17:53.456916   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/client.crt: no such file or directory
E0817 21:17:53.537179   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/client.crt: no such file or directory
E0817 21:17:53.697584   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/client.crt: no such file or directory
E0817 21:17:54.018147   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/client.crt: no such file or directory
E0817 21:17:54.659059   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/client.crt: no such file or directory
E0817 21:17:55.939506   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-702251 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m9.569032083s)
--- PASS: TestFunctional/serial/StartWithProxy (69.57s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (28.85s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-702251 --alsologtostderr -v=8
E0817 21:17:58.500421   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/client.crt: no such file or directory
E0817 21:18:03.620981   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/client.crt: no such file or directory
E0817 21:18:13.861484   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-702251 --alsologtostderr -v=8: (28.847785219s)
functional_test.go:659: soft start took 28.84846842s for "functional-702251" cluster.
--- PASS: TestFunctional/serial/SoftStart (28.85s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-702251 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.73s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.73s)

TestFunctional/serial/CacheCmd/cache/add_local (1.15s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-702251 /tmp/TestFunctionalserialCacheCmdcacheadd_local2336423615/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 cache add minikube-local-cache-test:functional-702251
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 cache delete minikube-local-cache-test:functional-702251
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-702251
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.15s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.45s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-702251 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (251.686089ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.45s)
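The cache_reload steps above form a remove/verify/reload/verify cycle: delete a cached image inside the node, confirm crictl no longer finds it (the expected exit status 1), run "cache reload", then confirm the image is back. The same cycle as a sketch that shells out to the minikube binary (assumes "minikube" is on PATH and the functional-702251 profile is running; both are assumptions):

// cachereload_sketch.go: replay the remove/verify/reload/verify cycle.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) error {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	fmt.Printf("$ minikube %v\n%s\n", args, out)
	return err
}

func main() {
	profile := []string{"-p", "functional-702251"}
	// 1. Remove the cached image inside the node.
	run(append(profile, "ssh", "sudo crictl rmi registry.k8s.io/pause:latest")...)
	// 2. The lookup should now fail, mirroring the non-zero exit above.
	if run(append(profile, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest")...) == nil {
		fmt.Println("unexpected: image still present")
	}
	// 3. Push everything in minikube's local cache back into the node.
	run(append(profile, "cache", "reload")...)
	// 4. The same lookup should succeed again.
	run(append(profile, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest")...)
}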
TestFunctional/serial/CacheCmd/cache/delete (0.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 kubectl -- --context functional-702251 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-702251 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (32.59s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-702251 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0817 21:18:34.342220   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-702251 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.593521634s)
functional_test.go:757: restart took 32.593640189s for "functional-702251" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (32.59s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-702251 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.3s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-702251 logs: (1.300352321s)
--- PASS: TestFunctional/serial/LogsCmd (1.30s)

TestFunctional/serial/LogsFileCmd (1.28s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 logs --file /tmp/TestFunctionalserialLogsFileCmd523417575/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-702251 logs --file /tmp/TestFunctionalserialLogsFileCmd523417575/001/logs.txt: (1.283473879s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.28s)

TestFunctional/serial/InvalidService (3.98s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-702251 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-702251
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-702251: exit status 115 (302.860328ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31205 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-702251 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.98s)

TestFunctional/parallel/ConfigCmd (0.33s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-702251 config get cpus: exit status 14 (58.21455ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-702251 config get cpus: exit status 14 (42.419087ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.33s)
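
Note: both failures above are the expected exit status 14, which `config get` uses for an unset key. A minimal sketch of telling that case apart from a real failure (illustrative, not the test's code; the exit-code meaning is taken from this log):

	// configprobe.go — sketch: distinguish "key unset" (exit 14) from other failures.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("minikube", "-p", "functional-702251", "config", "get", "cpus")
		out, err := cmd.CombinedOutput()
		var ee *exec.ExitError
		switch {
		case err == nil:
			fmt.Printf("cpus is set: %s", out)
		case errors.As(err, &ee) && ee.ExitCode() == 14:
			// Matches "Error: specified key could not be found in config" above.
			fmt.Println("cpus is unset (exit status 14)")
		default:
			fmt.Println("unexpected failure:", err)
		}
	}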

TestFunctional/parallel/DashboardCmd (10.49s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-702251 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-702251 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 51474: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.49s)

TestFunctional/parallel/DryRun (0.38s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-702251 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-702251 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (158.69504ms)
-- stdout --
	* [functional-702251] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16865
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16865-10716/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-10716/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0817 21:19:23.757970   50974 out.go:296] Setting OutFile to fd 1 ...
	I0817 21:19:23.758088   50974 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:19:23.758099   50974 out.go:309] Setting ErrFile to fd 2...
	I0817 21:19:23.758103   50974 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:19:23.758295   50974 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-10716/.minikube/bin
	I0817 21:19:23.759096   50974 out.go:303] Setting JSON to false
	I0817 21:19:23.760604   50974 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3712,"bootTime":1692303452,"procs":711,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0817 21:19:23.760687   50974 start.go:138] virtualization: kvm guest
	I0817 21:19:23.763083   50974 out.go:177] * [functional-702251] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0817 21:19:23.764760   50974 out.go:177]   - MINIKUBE_LOCATION=16865
	I0817 21:19:23.764759   50974 notify.go:220] Checking for updates...
	I0817 21:19:23.766165   50974 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0817 21:19:23.767634   50974 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16865-10716/kubeconfig
	I0817 21:19:23.768963   50974 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-10716/.minikube
	I0817 21:19:23.770210   50974 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0817 21:19:23.772164   50974 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0817 21:19:23.774238   50974 config.go:182] Loaded profile config "functional-702251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 21:19:23.774869   50974 driver.go:373] Setting default libvirt URI to qemu:///system
	I0817 21:19:23.802399   50974 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0817 21:19:23.802524   50974 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0817 21:19:23.862825   50974 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:50 SystemTime:2023-08-17 21:19:23.85363679 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0817 21:19:23.862970   50974 docker.go:294] overlay module found
	I0817 21:19:23.865566   50974 out.go:177] * Using the docker driver based on existing profile
	I0817 21:19:23.867155   50974 start.go:298] selected driver: docker
	I0817 21:19:23.867171   50974 start.go:902] validating driver "docker" against &{Name:functional-702251 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:functional-702251 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:19:23.867261   50974 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0817 21:19:23.869591   50974 out.go:177] 
	W0817 21:19:23.871077   50974 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0817 21:19:23.872515   50974 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-702251 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.38s)
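
Note: exit status 23 here is the dry-run's pre-flight validation rejecting 250MB against the 1800MB usable minimum cited in the RSRC_INSUFFICIENT_REQ_MEMORY message. A sketch of that kind of check, illustrative only and not minikube's actual validation code:

	// memcheck.go — sketch of a pre-flight memory validation like the one
	// that produced RSRC_INSUFFICIENT_REQ_MEMORY above.
	package main

	import "fmt"

	const minUsableMB = 1800 // minimum cited in the log above

	func validateMemory(requestedMB int) error {
		if requestedMB < minUsableMB {
			return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
				requestedMB, minUsableMB)
		}
		return nil
	}

	func main() {
		if err := validateMemory(250); err != nil {
			fmt.Println("X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
		}
	}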

TestFunctional/parallel/InternationalLanguage (0.41s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-702251 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-702251 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (408.295377ms)
-- stdout --
	* [functional-702251] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16865
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16865-10716/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-10716/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0817 21:19:24.135607   51188 out.go:296] Setting OutFile to fd 1 ...
	I0817 21:19:24.135759   51188 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:19:24.135771   51188 out.go:309] Setting ErrFile to fd 2...
	I0817 21:19:24.135778   51188 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:19:24.136076   51188 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-10716/.minikube/bin
	I0817 21:19:24.136622   51188 out.go:303] Setting JSON to false
	I0817 21:19:24.138012   51188 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3712,"bootTime":1692303452,"procs":711,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0817 21:19:24.138070   51188 start.go:138] virtualization: kvm guest
	I0817 21:19:24.149954   51188 out.go:177] * [functional-702251] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	I0817 21:19:24.167137   51188 out.go:177]   - MINIKUBE_LOCATION=16865
	I0817 21:19:24.167145   51188 notify.go:220] Checking for updates...
	I0817 21:19:24.187940   51188 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0817 21:19:24.195702   51188 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16865-10716/kubeconfig
	I0817 21:19:24.205322   51188 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-10716/.minikube
	I0817 21:19:24.229538   51188 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0817 21:19:24.270716   51188 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0817 21:19:24.322164   51188 config.go:182] Loaded profile config "functional-702251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 21:19:24.322703   51188 driver.go:373] Setting default libvirt URI to qemu:///system
	I0817 21:19:24.346934   51188 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0817 21:19:24.347044   51188 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0817 21:19:24.410149   51188 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:50 SystemTime:2023-08-17 21:19:24.401866084 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0817 21:19:24.410274   51188 docker.go:294] overlay module found
	I0817 21:19:24.444630   51188 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0817 21:19:24.447920   51188 start.go:298] selected driver: docker
	I0817 21:19:24.447948   51188 start.go:902] validating driver "docker" against &{Name:functional-702251 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:functional-702251 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0817 21:19:24.448074   51188 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0817 21:19:24.466421   51188 out.go:177] 
	W0817 21:19:24.467943   51188 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0817 21:19:24.495130   51188 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.41s)
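
Note: the French output above comes from the same dry-run as before; only the process locale differs. A sketch of reproducing it, assuming an fr_FR.UTF-8 locale is installed on the host (how minikube detects the locale is not shown in this log, so treat the environment variables below as an assumption):

	// locale.go — sketch: run minikube under a French locale to reproduce
	// the translated RSRC_INSUFFICIENT_REQ_MEMORY message above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("minikube", "start", "-p", "functional-702251",
			"--dry-run", "--memory", "250MB", "--driver=docker", "--container-runtime=crio")
		// Inherit the environment but override the locale (assumed installed).
		cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8", "LANG=fr_FR.UTF-8")
		out, _ := cmd.CombinedOutput() // exit status 23 is expected, as in the log
		fmt.Printf("%s", out)
	}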

TestFunctional/parallel/StatusCmd (1s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.00s)

TestFunctional/parallel/ServiceCmdConnect (8.67s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-702251 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-702251 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6fb669fc84-cmdjl" [476cd178-b0d3-483f-8b2e-729a9032ae63] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-6fb669fc84-cmdjl" [476cd178-b0d3-483f-8b2e-729a9032ae63] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.010464953s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:31265
functional_test.go:1674: http://192.168.49.2:31265: success! body:

Hostname: hello-node-connect-6fb669fc84-cmdjl

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31265
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.67s)
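
Note: the NodePort URL above is allocated per run. A minimal sketch of the poll-until-reachable pattern the endpoint check relies on, with the URL hard-coded from this log purely for illustration:

	// poll.go — sketch: wait for a NodePort service to answer over HTTP.
	package main

	import (
		"fmt"
		"io"
		"log"
		"net/http"
		"time"
	)

	func main() {
		const url = "http://192.168.49.2:31265" // allocated per run; value taken from the log above
		deadline := time.Now().Add(2 * time.Minute)
		for {
			resp, err := http.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("%s: success! body:\n%s", url, body)
				return
			}
			if time.Now().After(deadline) {
				log.Fatalf("%s never became reachable: %v", url, err)
			}
			time.Sleep(2 * time.Second)
		}
	}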

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/PersistentVolumeClaim (30.95s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [014803ca-cb31-494e-b6a2-0aa7df9e43be] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.015283157s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-702251 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-702251 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-702251 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-702251 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [066cfd1a-0215-4331-afbd-8d0ecf94295e] Pending
helpers_test.go:344: "sp-pod" [066cfd1a-0215-4331-afbd-8d0ecf94295e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [066cfd1a-0215-4331-afbd-8d0ecf94295e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.008940462s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-702251 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-702251 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-702251 delete -f testdata/storage-provisioner/pod.yaml: (1.471972535s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-702251 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ca08eb6e-5941-4726-a1e4-7d15cb014159] Pending
helpers_test.go:344: "sp-pod" [ca08eb6e-5941-4726-a1e4-7d15cb014159] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ca08eb6e-5941-4726-a1e4-7d15cb014159] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.062283051s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-702251 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (30.95s)
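
Note: the sequence above is what proves persistence: write through the PVC-backed mount, delete and re-create the consuming pod, then list the mount again. A compressed sketch of that round trip (illustrative; a real check also waits for the new pod to be Running, as the test does):

	// pvcpersist.go — sketch of the write / re-create / verify round trip above.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	// kubectl runs a kubectl command against this profile's context and
	// aborts on failure.
	func kubectl(args ...string) {
		full := append([]string{"--context", "functional-702251"}, args...)
		out, err := exec.Command("kubectl", full...).CombinedOutput()
		if err != nil {
			log.Fatalf("kubectl %v: %v\n%s", args, err, out)
		}
		fmt.Printf("%s", out)
	}

	func main() {
		// Write a file through the PVC-backed mount, then re-create the pod.
		kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
		kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
		kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
		// (Wait for the replacement pod to be Running before this step.)
		kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
	}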

TestFunctional/parallel/SSHCmd (0.63s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.63s)

TestFunctional/parallel/CpCmd (1.3s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 ssh -n functional-702251 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 cp functional-702251:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3891007605/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 ssh -n functional-702251 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.30s)

TestFunctional/parallel/MySQL (21.05s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-702251 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
2023/08/17 21:19:34 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:344: "mysql-7db894d786-g9h56" [bc8c70db-663f-4e25-b2d7-7fe22f4ed798] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-7db894d786-g9h56" [bc8c70db-663f-4e25-b2d7-7fe22f4ed798] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 18.114609531s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-702251 exec mysql-7db894d786-g9h56 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-702251 exec mysql-7db894d786-g9h56 -- mysql -ppassword -e "show databases;": exit status 1 (142.46528ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-702251 exec mysql-7db894d786-g9h56 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-702251 exec mysql-7db894d786-g9h56 -- mysql -ppassword -e "show databases;": exit status 1 (131.831106ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-702251 exec mysql-7db894d786-g9h56 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.05s)
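
Note: the two ERROR 2002 failures above are expected while mysqld is still starting inside the pod; the test simply retries the query until it succeeds. A generic retry helper in that spirit (a sketch; the attempt count and interval are arbitrary):

	// retry.go — sketch: retry a startup-dependent command, as the MySQL
	// assertions above do after the ERROR 2002 connection failures.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"time"
	)

	// retry calls f up to attempts times, sleeping interval between tries.
	func retry(attempts int, interval time.Duration, f func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = f(); err == nil {
				return nil
			}
			time.Sleep(interval)
		}
		return fmt.Errorf("still failing after %d attempts: %w", attempts, err)
	}

	func main() {
		err := retry(10, 3*time.Second, func() error {
			return exec.Command("kubectl", "--context", "functional-702251",
				"exec", "mysql-7db894d786-g9h56", "--",
				"mysql", "-ppassword", "-e", "show databases;").Run()
		})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("mysqld is accepting connections")
	}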

TestFunctional/parallel/FileSync (0.26s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/17504/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 ssh "sudo cat /etc/test/nested/copy/17504/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.26s)

TestFunctional/parallel/CertSync (1.72s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/17504.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 ssh "sudo cat /etc/ssl/certs/17504.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/17504.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 ssh "sudo cat /usr/share/ca-certificates/17504.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/175042.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 ssh "sudo cat /etc/ssl/certs/175042.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/175042.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 ssh "sudo cat /usr/share/ca-certificates/175042.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.72s)

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-702251 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.5s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-702251 ssh "sudo systemctl is-active docker": exit status 1 (250.330132ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-702251 ssh "sudo systemctl is-active containerd": exit status 1 (244.815352ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.50s)
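
Note: `systemctl is-active` prints the unit state and exits non-zero for anything other than active, so the assertion has to read both stdout and the exit status. A sketch of that check, assuming SSH access through the minikube CLI as above:

	// runtimecheck.go — sketch: assert a unit is inactive, reading both
	// the printed state and the exit status, as the checks above do.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	// unitState returns what `systemctl is-active` printed for the unit,
	// plus the exit code of the wrapping minikube ssh invocation.
	func unitState(unit string) (string, int) {
		cmd := exec.Command("minikube", "-p", "functional-702251",
			"ssh", "sudo systemctl is-active "+unit)
		out, _ := cmd.Output() // a non-zero exit is expected for inactive units
		return strings.TrimSpace(string(out)), cmd.ProcessState.ExitCode()
	}

	func main() {
		for _, unit := range []string{"docker", "containerd"} {
			state, code := unitState(unit)
			if state != "inactive" {
				log.Fatalf("%s: want inactive, got %q (exit %d)", unit, state, code)
			}
			fmt.Printf("%s is inactive (exit %d)\n", unit, code)
		}
	}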

TestFunctional/parallel/License (0.17s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.17s)

TestFunctional/parallel/ServiceCmd/DeployApp (9.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-702251 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-702251 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-775766b4cc-9pfkt" [a0434057-c8bc-4baf-8066-4e4cf1a709f3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-775766b4cc-9pfkt" [a0434057-c8bc-4baf-8066-4e4cf1a709f3] Running
E0817 21:19:15.302611   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/client.crt: no such file or directory
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.058375837s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.28s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.56s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-702251 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-702251 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-702251 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 48239: os: process already finished
helpers_test.go:508: unable to kill pid 47916: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-702251 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.56s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-702251 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.43s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-702251 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [58b4ceb6-3782-46c6-ab0d-bcf20f6415be] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [58b4ceb6-3782-46c6-ab0d-bcf20f6415be] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 12.010020932s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.43s)

TestFunctional/parallel/ServiceCmd/List (0.56s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.56s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 service list -o json
functional_test.go:1493: Took "509.395747ms" to run "out/minikube-linux-amd64 -p functional-702251 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:30063
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

TestFunctional/parallel/ServiceCmd/Format (0.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.42s)

TestFunctional/parallel/ServiceCmd/URL (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:30063
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.35s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.49s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.49s)

TestFunctional/parallel/MountCmd/any-port (7.21s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-702251 /tmp/TestFunctionalparallelMountCmdany-port1795657497/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1692307161656787742" to /tmp/TestFunctionalparallelMountCmdany-port1795657497/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1692307161656787742" to /tmp/TestFunctionalparallelMountCmdany-port1795657497/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1692307161656787742" to /tmp/TestFunctionalparallelMountCmdany-port1795657497/001/test-1692307161656787742
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-702251 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (254.045262ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 17 21:19 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 17 21:19 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 17 21:19 test-1692307161656787742
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 ssh cat /mount-9p/test-1692307161656787742
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-702251 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [edeec7a7-8a12-487d-b377-85b9204a5d92] Pending
helpers_test.go:344: "busybox-mount" [edeec7a7-8a12-487d-b377-85b9204a5d92] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [edeec7a7-8a12-487d-b377-85b9204a5d92] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [edeec7a7-8a12-487d-b377-85b9204a5d92] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.009637988s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-702251 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-702251 /tmp/TestFunctionalparallelMountCmdany-port1795657497/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.21s)
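
For reference, the 9p mount flow exercised above can be reproduced by hand. A minimal sketch using the same commands the test runs; the profile name and host path are placeholders:

  # Serve a host directory into the guest over 9p (foreground process).
  minikube mount -p <profile> /tmp/host-dir:/mount-9p &
  # Confirm the guest sees a 9p filesystem at the mount point.
  minikube -p <profile> ssh "findmnt -T /mount-9p | grep 9p"
  # Tear down: force-unmount inside the guest, then stop the mount process.
  minikube -p <profile> ssh "sudo umount -f /mount-9p"
  kill %1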

TestFunctional/parallel/ProfileCmd/profile_list (0.32s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "279.204486ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "42.641449ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.3s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "254.963685ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "43.007063ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.30s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-702251 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.27.4
registry.k8s.io/kube-proxy:v1.27.4
registry.k8s.io/kube-controller-manager:v1.27.4
registry.k8s.io/kube-apiserver:v1.27.4
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-702251
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230511-dc714da8
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-702251 image ls --format short --alsologtostderr:
I0817 21:19:48.593245   55287 out.go:296] Setting OutFile to fd 1 ...
I0817 21:19:48.593512   55287 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0817 21:19:48.593565   55287 out.go:309] Setting ErrFile to fd 2...
I0817 21:19:48.593587   55287 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0817 21:19:48.593836   55287 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-10716/.minikube/bin
I0817 21:19:48.594483   55287 config.go:182] Loaded profile config "functional-702251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0817 21:19:48.594662   55287 config.go:182] Loaded profile config "functional-702251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0817 21:19:48.595129   55287 cli_runner.go:164] Run: docker container inspect functional-702251 --format={{.State.Status}}
I0817 21:19:48.612259   55287 ssh_runner.go:195] Run: systemctl --version
I0817 21:19:48.612316   55287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-702251
I0817 21:19:48.630739   55287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/functional-702251/id_rsa Username:docker}
I0817 21:19:48.717952   55287 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-702251 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                 | latest             | eea7b3dcba7ee | 191MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/kube-proxy              | v1.27.4            | 6848d7eda0341 | 72.7MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| docker.io/library/nginx                 | alpine             | eaf194063ee28 | 44.4MB |
| gcr.io/google-containers/addon-resizer  | functional-702251  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/kube-scheduler          | v1.27.4            | 98ef2570f3cde | 59.8MB |
| docker.io/library/mysql                 | 5.7                | 92034fe9a41f4 | 601MB  |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/etcd                    | 3.5.7-0            | 86b6af7dd652c | 297MB  |
| registry.k8s.io/kube-controller-manager | v1.27.4            | f466468864b7a | 114MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20230511-dc714da8 | b0b1fa0f58c6e | 65.2MB |
| registry.k8s.io/kube-apiserver          | v1.27.4            | e7972205b6614 | 122MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-702251 image ls --format table --alsologtostderr:
I0817 21:19:48.818506   55429 out.go:296] Setting OutFile to fd 1 ...
I0817 21:19:48.818600   55429 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0817 21:19:48.818608   55429 out.go:309] Setting ErrFile to fd 2...
I0817 21:19:48.818612   55429 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0817 21:19:48.818816   55429 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-10716/.minikube/bin
I0817 21:19:48.819342   55429 config.go:182] Loaded profile config "functional-702251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0817 21:19:48.819438   55429 config.go:182] Loaded profile config "functional-702251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0817 21:19:48.819788   55429 cli_runner.go:164] Run: docker container inspect functional-702251 --format={{.State.Status}}
I0817 21:19:48.839418   55429 ssh_runner.go:195] Run: systemctl --version
I0817 21:19:48.839462   55429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-702251
I0817 21:19:48.859340   55429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/functional-702251/id_rsa Username:docker}
I0817 21:19:48.954636   55429 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-702251 image ls --format json --alsologtostderr:
[{"id":"6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4","repoDigests":["registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf","registry.k8s.io/kube-proxy@sha256:ce9abe867450f8962eb851670b5869219ca0c3376777d1e18d89f9abedbe10c3"],"repoTags":["registry.k8s.io/kube-proxy:v1.27.4"],"size":"72714135"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a94
39c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681","repoDigests":["registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83","registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9"],"repoTags":["registry.k8s.io/etcd:3.5.7-0"],"size":"297083935"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sh
a256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c","repoDigests":["registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d","registry.k8s.io/kube-apiserver@sha256:dcf39b4579f896291ec79bb2ef94ad2b51e2ad1846086df705b06dc3ae20c854"],"repoTags":["registry.k8s.io/kube-apiserver:v1.27.4"],"size":"122078160"},{"id":"f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265","registry.k8s.io/kube-controller-manager@sha256:c4765f94930681526ac9179fc4e49b5254abcbfa33841af4602a52bc664f6934"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.27.4"],"size":"113931062"},{"id":"6e38f40d628db3002f5617342c8872c
935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16","repoDigests":["registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af","registry.k8s.io/kube-scheduler@sha256:9c58009453cfcd7533721327269d2ef0af93d09f21812a5d584c375840117da7"],"repoTags":["registry.k8s.io/kube-scheduler:v1.27.4"],"size":"59814710"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["re
gistry.k8s.io/pause:3.9"],"size":"750414"},{"id":"eaf194063ee287f60137b88326ed4d3a14ec62f20de06df6ff7f8b5ed9f1d08c","repoDigests":["docker.io/library/nginx@sha256:9d749f4c66e206398d0c2d667b2b14c201cc4fd089419245c14a031b9368de3a","docker.io/library/nginx@sha256:cac882be2b7305e0c8d3e3cd0575a2fd58f5fde6dd5d6299605aa0f3e67ca385"],"repoTags":["docker.io/library/nginx:alpine"],"size":"44389671"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-702251"],"size":"34114467"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28
.4-glibc"],"size":"4631262"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da","repoDigests":["docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974","docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"],"repoTags":["docker.io/kindest/kindnetd:v20230511-dc714da8"],"size":"65249302"},{"id":"92034fe9a41f4344b97f3fc88a8796248e2cfa9b934be58379f3dbc150d07d9d","repoDigests":["docker.io/library/mysql@sha256:2c23f2
54c6b9444ecda9ba36051a9800e8934a2f5828ecc8730531db8142af83","docker.io/library/mysql@sha256:aaa1374f1e6c24d73e9dfa8f2cdae81c8054e6d1d80c32da883a9050258b6e83"],"repoTags":["docker.io/library/mysql:5.7"],"size":"601277093"},{"id":"eea7b3dcba7ee47c0d16a60cc85d2b977d166be3960541991f3e6294d795ed24","repoDigests":["docker.io/library/nginx@sha256:13d22ec63300e16014d4a42aed735207a8b33c223cff19627dd3042e5a10a3a0","docker.io/library/nginx@sha256:48a84a0728cab8ac558f48796f901f6d31d287101bc8b317683678125e0d2d35"],"repoTags":["docker.io/library/nginx:latest"],"size":"190820092"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-702251 image ls --format json --alsologtostderr:
I0817 21:19:48.602762   55288 out.go:296] Setting OutFile to fd 1 ...
I0817 21:19:48.602861   55288 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0817 21:19:48.602868   55288 out.go:309] Setting ErrFile to fd 2...
I0817 21:19:48.602872   55288 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0817 21:19:48.603064   55288 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-10716/.minikube/bin
I0817 21:19:48.603615   55288 config.go:182] Loaded profile config "functional-702251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0817 21:19:48.603705   55288 config.go:182] Loaded profile config "functional-702251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0817 21:19:48.604043   55288 cli_runner.go:164] Run: docker container inspect functional-702251 --format={{.State.Status}}
I0817 21:19:48.622704   55288 ssh_runner.go:195] Run: systemctl --version
I0817 21:19:48.622738   55288 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-702251
I0817 21:19:48.639690   55288 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/functional-702251/id_rsa Username:docker}
I0817 21:19:48.725576   55288 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
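
The stdout above is a single JSON array whose objects carry id, repoDigests, repoTags, and size fields, so it is convenient for scripting. A minimal sketch, assuming jq is available on the host and using a placeholder profile name:

  # Print every tag known to the runtime, one per line.
  minikube -p <profile> image ls --format json | jq -r '.[].repoTags[]'
  # Sum the reported image sizes (size is a decimal string in bytes).
  minikube -p <profile> image ls --format json | jq '[.[].size | tonumber] | add'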

TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-702251 image ls --format yaml --alsologtostderr:
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681
repoDigests:
- registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83
- registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9
repoTags:
- registry.k8s.io/etcd:3.5.7-0
size: "297083935"
- id: f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265
- registry.k8s.io/kube-controller-manager@sha256:c4765f94930681526ac9179fc4e49b5254abcbfa33841af4602a52bc664f6934
repoTags:
- registry.k8s.io/kube-controller-manager:v1.27.4
size: "113931062"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da
repoDigests:
- docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974
- docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9
repoTags:
- docker.io/kindest/kindnetd:v20230511-dc714da8
size: "65249302"
- id: eaf194063ee287f60137b88326ed4d3a14ec62f20de06df6ff7f8b5ed9f1d08c
repoDigests:
- docker.io/library/nginx@sha256:9d749f4c66e206398d0c2d667b2b14c201cc4fd089419245c14a031b9368de3a
- docker.io/library/nginx@sha256:cac882be2b7305e0c8d3e3cd0575a2fd58f5fde6dd5d6299605aa0f3e67ca385
repoTags:
- docker.io/library/nginx:alpine
size: "44389671"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af
- registry.k8s.io/kube-scheduler@sha256:9c58009453cfcd7533721327269d2ef0af93d09f21812a5d584c375840117da7
repoTags:
- registry.k8s.io/kube-scheduler:v1.27.4
size: "59814710"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf
- registry.k8s.io/kube-proxy@sha256:ce9abe867450f8962eb851670b5869219ca0c3376777d1e18d89f9abedbe10c3
repoTags:
- registry.k8s.io/kube-proxy:v1.27.4
size: "72714135"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 92034fe9a41f4344b97f3fc88a8796248e2cfa9b934be58379f3dbc150d07d9d
repoDigests:
- docker.io/library/mysql@sha256:2c23f254c6b9444ecda9ba36051a9800e8934a2f5828ecc8730531db8142af83
- docker.io/library/mysql@sha256:aaa1374f1e6c24d73e9dfa8f2cdae81c8054e6d1d80c32da883a9050258b6e83
repoTags:
- docker.io/library/mysql:5.7
size: "601277093"
- id: eea7b3dcba7ee47c0d16a60cc85d2b977d166be3960541991f3e6294d795ed24
repoDigests:
- docker.io/library/nginx@sha256:13d22ec63300e16014d4a42aed735207a8b33c223cff19627dd3042e5a10a3a0
- docker.io/library/nginx@sha256:48a84a0728cab8ac558f48796f901f6d31d287101bc8b317683678125e0d2d35
repoTags:
- docker.io/library/nginx:latest
size: "190820092"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-702251
size: "34114467"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d
- registry.k8s.io/kube-apiserver@sha256:dcf39b4579f896291ec79bb2ef94ad2b51e2ad1846086df705b06dc3ae20c854
repoTags:
- registry.k8s.io/kube-apiserver:v1.27.4
size: "122078160"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-702251 image ls --format yaml --alsologtostderr:
I0817 21:19:48.599216   55289 out.go:296] Setting OutFile to fd 1 ...
I0817 21:19:48.599335   55289 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0817 21:19:48.599346   55289 out.go:309] Setting ErrFile to fd 2...
I0817 21:19:48.599352   55289 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0817 21:19:48.599651   55289 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-10716/.minikube/bin
I0817 21:19:48.600446   55289 config.go:182] Loaded profile config "functional-702251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0817 21:19:48.600593   55289 config.go:182] Loaded profile config "functional-702251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0817 21:19:48.601134   55289 cli_runner.go:164] Run: docker container inspect functional-702251 --format={{.State.Status}}
I0817 21:19:48.617623   55289 ssh_runner.go:195] Run: systemctl --version
I0817 21:19:48.617658   55289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-702251
I0817 21:19:48.633681   55289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/functional-702251/id_rsa Username:docker}
I0817 21:19:48.721849   55289 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.72s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-702251 ssh pgrep buildkitd: exit status 1 (254.668784ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 image build -t localhost/my-image:functional-702251 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-702251 image build -t localhost/my-image:functional-702251 testdata/build --alsologtostderr: (1.265496295s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-702251 image build -t localhost/my-image:functional-702251 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> d0d7009c0a3
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-702251
--> ec41e86d473
Successfully tagged localhost/my-image:functional-702251
ec41e86d473fbb5baa47452d4e715820385c2d9424b734442174241a9e1eed73
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-702251 image build -t localhost/my-image:functional-702251 testdata/build --alsologtostderr:
I0817 21:19:49.053428   55553 out.go:296] Setting OutFile to fd 1 ...
I0817 21:19:49.053600   55553 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0817 21:19:49.053611   55553 out.go:309] Setting ErrFile to fd 2...
I0817 21:19:49.053618   55553 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0817 21:19:49.053834   55553 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-10716/.minikube/bin
I0817 21:19:49.054422   55553 config.go:182] Loaded profile config "functional-702251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0817 21:19:49.054919   55553 config.go:182] Loaded profile config "functional-702251": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0817 21:19:49.055338   55553 cli_runner.go:164] Run: docker container inspect functional-702251 --format={{.State.Status}}
I0817 21:19:49.072487   55553 ssh_runner.go:195] Run: systemctl --version
I0817 21:19:49.072550   55553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-702251
I0817 21:19:49.088404   55553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/functional-702251/id_rsa Username:docker}
I0817 21:19:49.173994   55553 build_images.go:151] Building image from path: /tmp/build.3805812302.tar
I0817 21:19:49.174051   55553 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0817 21:19:49.182004   55553 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3805812302.tar
I0817 21:19:49.184796   55553 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3805812302.tar: stat -c "%s %y" /var/lib/minikube/build/build.3805812302.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3805812302.tar': No such file or directory
I0817 21:19:49.184818   55553 ssh_runner.go:362] scp /tmp/build.3805812302.tar --> /var/lib/minikube/build/build.3805812302.tar (3072 bytes)
I0817 21:19:49.205573   55553 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3805812302
I0817 21:19:49.212883   55553 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3805812302 -xf /var/lib/minikube/build/build.3805812302.tar
I0817 21:19:49.220387   55553 crio.go:297] Building image: /var/lib/minikube/build/build.3805812302
I0817 21:19:49.220470   55553 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-702251 /var/lib/minikube/build/build.3805812302 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0817 21:19:50.261508   55553 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-702251 /var/lib/minikube/build/build.3805812302 --cgroup-manager=cgroupfs: (1.040996815s)
I0817 21:19:50.261577   55553 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3805812302
I0817 21:19:50.269559   55553 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3805812302.tar
I0817 21:19:50.276904   55553 build_images.go:207] Built localhost/my-image:functional-702251 from /tmp/build.3805812302.tar
I0817 21:19:50.276936   55553 build_images.go:123] succeeded building to: functional-702251
I0817 21:19:50.276941   55553 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.72s)
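
The STEP lines in the stdout above imply a three-instruction build context in testdata/build; on the crio runtime the build is delegated to podman inside the guest, as the "sudo podman build ... --cgroup-manager=cgroupfs" line shows. A hypothetical sketch, with the Dockerfile contents inferred only from the log (the real fixture may differ) and a placeholder profile name:

  # testdata/build/Dockerfile, reconstructed from the STEP 1/3..3/3 output:
  #   FROM gcr.io/k8s-minikube/busybox
  #   RUN true
  #   ADD content.txt /
  minikube -p <profile> image build -t localhost/my-image:<profile> testdata/build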

TestFunctional/parallel/ImageCommands/Setup (0.89s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-702251
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.89s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-702251 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.59.252 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-702251 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 image load --daemon gcr.io/google-containers/addon-resizer:functional-702251 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-702251 image load --daemon gcr.io/google-containers/addon-resizer:functional-702251 --alsologtostderr: (4.980287452s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.28s)

TestFunctional/parallel/MountCmd/specific-port (1.79s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-702251 /tmp/TestFunctionalparallelMountCmdspecific-port1483381611/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-702251 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (330.396718ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-702251 /tmp/TestFunctionalparallelMountCmdspecific-port1483381611/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-702251 ssh "sudo umount -f /mount-9p": exit status 1 (283.842705ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-702251 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-702251 /tmp/TestFunctionalparallelMountCmdspecific-port1483381611/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.79s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.2s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 image load --daemon gcr.io/google-containers/addon-resizer:functional-702251 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-702251 image load --daemon gcr.io/google-containers/addon-resizer:functional-702251 --alsologtostderr: (2.983220967s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.20s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.15s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-702251 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3191877261/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-702251 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3191877261/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-702251 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3191877261/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-702251 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-702251 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3191877261/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-702251 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3191877261/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-702251 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3191877261/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.15s)
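
Note that cleanup here does not stop each mount process one by one: a single --kill flag tears down every active mount for the profile at once. A minimal sketch with placeholder names and paths:

  # Start several concurrent mounts of the same host directory.
  minikube mount -p <profile> /tmp/host-dir:/mount1 &
  minikube mount -p <profile> /tmp/host-dir:/mount2 &
  # Kill all mount processes for the profile in one shot.
  minikube mount -p <profile> --kill=true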

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.21s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-702251
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 image load --daemon gcr.io/google-containers/addon-resizer:functional-702251 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-702251 image load --daemon gcr.io/google-containers/addon-resizer:functional-702251 --alsologtostderr: (6.06746358s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.21s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (4.2s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 image save gcr.io/google-containers/addon-resizer:functional-702251 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-702251 image save gcr.io/google-containers/addon-resizer:functional-702251 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (4.198881234s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (4.20s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.53s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-702251 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.316653664s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.53s)
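
Together with ImageSaveToFile above, this completes the tarball round trip: an image is exported from the cluster's container runtime to the host and then loaded back in. A minimal sketch, with placeholder profile and image references:

  # Export an image from the cluster runtime to a host tarball.
  minikube -p <profile> image save <image>:<tag> ./image.tar
  # Load the tarball back into the cluster and verify it is listed.
  minikube -p <profile> image load ./image.tar
  minikube -p <profile> image ls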

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.87s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-702251
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-702251 image save --daemon gcr.io/google-containers/addon-resizer:functional-702251 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-702251
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.87s)

TestFunctional/delete_addon-resizer_images (0.07s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-702251
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-702251
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-702251
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestIngressAddonLegacy/StartLegacyK8sCluster (80.67s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-997484 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0817 21:20:37.222895   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-997484 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m20.670269251s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (80.67s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.29s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-997484 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-997484 addons enable ingress --alsologtostderr -v=5: (11.293966327s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.29s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.53s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-997484 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.53s)

TestJSONOutput/start/Command (68.75s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-348914 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0817 21:24:51.330801   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/functional-702251/client.crt: no such file or directory
E0817 21:25:32.291851   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/functional-702251/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-348914 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m8.75431825s)
--- PASS: TestJSONOutput/start/Command (68.75s)
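
With --output=json, minikube emits one machine-readable JSON event per line instead of human-oriented progress text, which is what the Audit and parallel step-count subtests below assert against. A minimal sketch for inspecting the stream, assuming jq is available (the exact event schema can vary between minikube versions) and using a placeholder profile name:

  # Pretty-print each JSON event as it arrives.
  minikube start -p <profile> --output=json --driver=docker --container-runtime=crio | jq .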

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.63s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-348914 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.63s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.59s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-348914 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.68s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-348914 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-348914 --output=json --user=testUser: (5.683753295s)
--- PASS: TestJSONOutput/stop/Command (5.68s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.19s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-940676 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-940676 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (62.390487ms)
-- stdout --
	{"specversion":"1.0","id":"e5e156e7-669a-40ab-840c-da858059ca03","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-940676] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"92e3824f-229a-452c-88c7-69a63b01e52e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16865"}}
	{"specversion":"1.0","id":"5e06cc59-b43d-49f8-b307-c61539d26d03","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7e4004cc-f261-4983-a7f9-8c372391eb40","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/16865-10716/kubeconfig"}}
	{"specversion":"1.0","id":"eed93aa2-65cd-421d-8a2f-0f5957a84642","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-10716/.minikube"}}
	{"specversion":"1.0","id":"36955589-3cef-4c1d-a759-41933a5dfac6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"6d880159-d799-4cfc-8a70-b3517dae1dc3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"81f5cb1e-fe06-423e-97da-71c5b3fdddff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-940676" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-940676
--- PASS: TestErrorJSONOutput (0.19s)
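Note: each stdout line in TestErrorJSONOutput above is a CloudEvents-style JSON object, one event per line. As a minimal sketch of how such output can be consumed (this program is not part of the test suite, and the struct mirrors only the fields visible in this log):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// cloudEvent mirrors the fields visible in the log above; the real event
// schema may carry more.
type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Read line-delimited events, e.g. piped from `minikube start --output=json`.
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate non-JSON lines interleaved in the stream
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("minikube error %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
		}
	}
}

Fed the stdout block above, this sketch would print the DRV_UNSUPPORTED_OS message with exit code 56.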
TestKicCustomNetwork/create_custom_network (34.62s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-479244 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-479244 --network=: (32.58183706s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-479244" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-479244
E0817 21:26:31.781884   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/client.crt: no such file or directory
E0817 21:26:31.787221   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/client.crt: no such file or directory
E0817 21:26:31.797484   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/client.crt: no such file or directory
E0817 21:26:31.817839   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/client.crt: no such file or directory
E0817 21:26:31.858151   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/client.crt: no such file or directory
E0817 21:26:31.938503   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/client.crt: no such file or directory
E0817 21:26:32.098937   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/client.crt: no such file or directory
E0817 21:26:32.419511   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/client.crt: no such file or directory
E0817 21:26:33.060416   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-479244: (2.023193034s)
--- PASS: TestKicCustomNetwork/create_custom_network (34.62s)

TestKicCustomNetwork/use_default_bridge_network (23.93s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-054766 --network=bridge
E0817 21:26:34.341143   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/client.crt: no such file or directory
E0817 21:26:36.901676   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/client.crt: no such file or directory
E0817 21:26:42.022248   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/client.crt: no such file or directory
E0817 21:26:52.263247   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/client.crt: no such file or directory
E0817 21:26:54.212321   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/functional-702251/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-054766 --network=bridge: (22.091448465s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-054766" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-054766
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-054766: (1.819536885s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.93s)

TestKicExistingNetwork (25.34s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-900045 --network=existing-network
E0817 21:27:12.743424   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-900045 --network=existing-network: (23.272970934s)
helpers_test.go:175: Cleaning up "existing-network-900045" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-900045
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-900045: (1.94023963s)
--- PASS: TestKicExistingNetwork (25.34s)

TestKicCustomSubnet (26.92s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-566370 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-566370 --subnet=192.168.60.0/24: (24.866665323s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-566370 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-566370" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-566370
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-566370: (2.036571537s)
--- PASS: TestKicCustomSubnet (26.92s)
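Note: the --format argument used above ("{{(index .IPAM.Config 0).Subnet}}") is a Go template that the docker CLI evaluates against the inspected network. A self-contained sketch of how that expression resolves, using Go's text/template and a hypothetical stand-in struct (not docker's actual types):

package main

import (
	"os"
	"text/template"
)

// network mirrors only the fragment of `docker network inspect` output that
// the format string touches.
type network struct {
	IPAM struct {
		Config []struct{ Subnet string }
	}
}

func main() {
	var n network
	n.IPAM.Config = append(n.IPAM.Config, struct{ Subnet string }{"192.168.60.0/24"})
	tmpl := template.Must(template.New("subnet").Parse(`{{(index .IPAM.Config 0).Subnet}}`))
	if err := tmpl.Execute(os.Stdout, n); err != nil { // prints: 192.168.60.0/24
		panic(err)
	}
}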
TestKicStaticIP (27.07s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-256961 --static-ip=192.168.200.200
E0817 21:27:53.380247   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/client.crt: no such file or directory
E0817 21:27:53.703682   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-256961 --static-ip=192.168.200.200: (24.997335294s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-256961 ip
helpers_test.go:175: Cleaning up "static-ip-256961" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-256961
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-256961: (1.955238041s)
--- PASS: TestKicStaticIP (27.07s)

TestMainNoArgs (0.04s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (52.17s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-640444 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-640444 --driver=docker  --container-runtime=crio: (23.209188693s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-642924 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-642924 --driver=docker  --container-runtime=crio: (24.062980378s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-640444
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-642924
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-642924" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-642924
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-642924: (1.810194982s)
helpers_test.go:175: Cleaning up "first-640444" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-640444
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-640444: (2.158224171s)
--- PASS: TestMinikubeProfile (52.17s)

TestMountStart/serial/StartWithMountFirst (5.55s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-810049 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
E0817 21:29:10.367162   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/functional-702251/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-810049 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.553250335s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.55s)

TestMountStart/serial/VerifyMountFirst (0.23s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-810049 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.23s)

TestMountStart/serial/StartWithMountSecond (8.09s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-823424 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
E0817 21:29:15.624709   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-823424 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.092887372s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.09s)

TestMountStart/serial/VerifyMountSecond (0.23s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-823424 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.23s)

TestMountStart/serial/DeleteFirst (1.61s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-810049 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-810049 --alsologtostderr -v=5: (1.610301965s)
--- PASS: TestMountStart/serial/DeleteFirst (1.61s)

TestMountStart/serial/VerifyMountPostDelete (0.23s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-823424 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.23s)

TestMountStart/serial/Stop (1.17s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-823424
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-823424: (1.174606916s)
--- PASS: TestMountStart/serial/Stop (1.17s)

TestMountStart/serial/RestartStopped (7.01s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-823424
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-823424: (6.011943092s)
--- PASS: TestMountStart/serial/RestartStopped (7.01s)

TestMountStart/serial/VerifyMountPostStop (0.23s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-823424 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.23s)

TestMultiNode/serial/FreshStart2Nodes (55.37s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-938028 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0817 21:29:38.053949   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/functional-702251/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-938028 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (54.940864033s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-938028 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (55.37s)

TestMultiNode/serial/DeployApp2Nodes (3.38s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-938028 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-938028 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-938028 -- rollout status deployment/busybox: (1.691385443s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-938028 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-938028 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-938028 -- exec busybox-67b7f59bb-b9qpl -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-938028 -- exec busybox-67b7f59bb-khspl -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-938028 -- exec busybox-67b7f59bb-b9qpl -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-938028 -- exec busybox-67b7f59bb-khspl -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-938028 -- exec busybox-67b7f59bb-b9qpl -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-938028 -- exec busybox-67b7f59bb-khspl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.38s)

TestMultiNode/serial/AddNode (18.27s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-938028 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-938028 -v 3 --alsologtostderr: (17.709218967s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-938028 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.27s)

TestMultiNode/serial/ProfileList (0.26s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.26s)

TestMultiNode/serial/CopyFile (8.54s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-938028 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-938028 cp testdata/cp-test.txt multinode-938028:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-938028 ssh -n multinode-938028 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-938028 cp multinode-938028:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3989578328/001/cp-test_multinode-938028.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-938028 ssh -n multinode-938028 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-938028 cp multinode-938028:/home/docker/cp-test.txt multinode-938028-m02:/home/docker/cp-test_multinode-938028_multinode-938028-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-938028 ssh -n multinode-938028 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-938028 ssh -n multinode-938028-m02 "sudo cat /home/docker/cp-test_multinode-938028_multinode-938028-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-938028 cp multinode-938028:/home/docker/cp-test.txt multinode-938028-m03:/home/docker/cp-test_multinode-938028_multinode-938028-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-938028 ssh -n multinode-938028 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-938028 ssh -n multinode-938028-m03 "sudo cat /home/docker/cp-test_multinode-938028_multinode-938028-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-938028 cp testdata/cp-test.txt multinode-938028-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-938028 ssh -n multinode-938028-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-938028 cp multinode-938028-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3989578328/001/cp-test_multinode-938028-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-938028 ssh -n multinode-938028-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-938028 cp multinode-938028-m02:/home/docker/cp-test.txt multinode-938028:/home/docker/cp-test_multinode-938028-m02_multinode-938028.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-938028 ssh -n multinode-938028-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-938028 ssh -n multinode-938028 "sudo cat /home/docker/cp-test_multinode-938028-m02_multinode-938028.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-938028 cp multinode-938028-m02:/home/docker/cp-test.txt multinode-938028-m03:/home/docker/cp-test_multinode-938028-m02_multinode-938028-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-938028 ssh -n multinode-938028-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-938028 ssh -n multinode-938028-m03 "sudo cat /home/docker/cp-test_multinode-938028-m02_multinode-938028-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-938028 cp testdata/cp-test.txt multinode-938028-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-938028 ssh -n multinode-938028-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-938028 cp multinode-938028-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3989578328/001/cp-test_multinode-938028-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-938028 ssh -n multinode-938028-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-938028 cp multinode-938028-m03:/home/docker/cp-test.txt multinode-938028:/home/docker/cp-test_multinode-938028-m03_multinode-938028.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-938028 ssh -n multinode-938028-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-938028 ssh -n multinode-938028 "sudo cat /home/docker/cp-test_multinode-938028-m03_multinode-938028.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-938028 cp multinode-938028-m03:/home/docker/cp-test.txt multinode-938028-m02:/home/docker/cp-test_multinode-938028-m03_multinode-938028-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-938028 ssh -n multinode-938028-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-938028 ssh -n multinode-938028-m02 "sudo cat /home/docker/cp-test_multinode-938028-m03_multinode-938028-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.54s)

TestMultiNode/serial/StopNode (2.07s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-938028 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-938028 node stop m03: (1.191019782s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-938028 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-938028 status: exit status 7 (444.819213ms)
-- stdout --
	multinode-938028
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-938028-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-938028-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-938028 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-938028 status --alsologtostderr: exit status 7 (432.570332ms)
-- stdout --
	multinode-938028
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-938028-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-938028-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0817 21:31:05.199393  115008 out.go:296] Setting OutFile to fd 1 ...
	I0817 21:31:05.199505  115008 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:31:05.199514  115008 out.go:309] Setting ErrFile to fd 2...
	I0817 21:31:05.199518  115008 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:31:05.199706  115008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-10716/.minikube/bin
	I0817 21:31:05.199866  115008 out.go:303] Setting JSON to false
	I0817 21:31:05.199890  115008 mustload.go:65] Loading cluster: multinode-938028
	I0817 21:31:05.199994  115008 notify.go:220] Checking for updates...
	I0817 21:31:05.200225  115008 config.go:182] Loaded profile config "multinode-938028": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 21:31:05.200236  115008 status.go:255] checking status of multinode-938028 ...
	I0817 21:31:05.200566  115008 cli_runner.go:164] Run: docker container inspect multinode-938028 --format={{.State.Status}}
	I0817 21:31:05.217806  115008 status.go:330] multinode-938028 host status = "Running" (err=<nil>)
	I0817 21:31:05.217826  115008 host.go:66] Checking if "multinode-938028" exists ...
	I0817 21:31:05.218100  115008 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-938028
	I0817 21:31:05.233337  115008 host.go:66] Checking if "multinode-938028" exists ...
	I0817 21:31:05.233602  115008 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 21:31:05.233646  115008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-938028
	I0817 21:31:05.250511  115008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/multinode-938028/id_rsa Username:docker}
	I0817 21:31:05.338628  115008 ssh_runner.go:195] Run: systemctl --version
	I0817 21:31:05.342356  115008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 21:31:05.351826  115008 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0817 21:31:05.401348  115008 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:56 SystemTime:2023-08-17 21:31:05.393365538 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0817 21:31:05.401867  115008 kubeconfig.go:92] found "multinode-938028" server: "https://192.168.58.2:8443"
	I0817 21:31:05.401891  115008 api_server.go:166] Checking apiserver status ...
	I0817 21:31:05.401948  115008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0817 21:31:05.411440  115008 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1425/cgroup
	I0817 21:31:05.419283  115008 api_server.go:182] apiserver freezer: "11:freezer:/docker/5ae5510f223cafb802b34b5efa574c24fd46098c1d7a1fa53350cbcba3370595/crio/crio-e9114519839bfbdd073eca3b60934c4db1770a1e64845d20ba657beb6585754a"
	I0817 21:31:05.419335  115008 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/5ae5510f223cafb802b34b5efa574c24fd46098c1d7a1fa53350cbcba3370595/crio/crio-e9114519839bfbdd073eca3b60934c4db1770a1e64845d20ba657beb6585754a/freezer.state
	I0817 21:31:05.426393  115008 api_server.go:204] freezer state: "THAWED"
	I0817 21:31:05.426500  115008 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0817 21:31:05.430600  115008 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0817 21:31:05.430620  115008 status.go:421] multinode-938028 apiserver status = Running (err=<nil>)
	I0817 21:31:05.430638  115008 status.go:257] multinode-938028 status: &{Name:multinode-938028 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0817 21:31:05.430661  115008 status.go:255] checking status of multinode-938028-m02 ...
	I0817 21:31:05.430879  115008 cli_runner.go:164] Run: docker container inspect multinode-938028-m02 --format={{.State.Status}}
	I0817 21:31:05.446635  115008 status.go:330] multinode-938028-m02 host status = "Running" (err=<nil>)
	I0817 21:31:05.446658  115008 host.go:66] Checking if "multinode-938028-m02" exists ...
	I0817 21:31:05.446875  115008 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-938028-m02
	I0817 21:31:05.462125  115008 host.go:66] Checking if "multinode-938028-m02" exists ...
	I0817 21:31:05.462384  115008 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0817 21:31:05.462438  115008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-938028-m02
	I0817 21:31:05.477073  115008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16865-10716/.minikube/machines/multinode-938028-m02/id_rsa Username:docker}
	I0817 21:31:05.566573  115008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0817 21:31:05.576753  115008 status.go:257] multinode-938028-m02 status: &{Name:multinode-938028-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0817 21:31:05.576794  115008 status.go:255] checking status of multinode-938028-m03 ...
	I0817 21:31:05.577030  115008 cli_runner.go:164] Run: docker container inspect multinode-938028-m03 --format={{.State.Status}}
	I0817 21:31:05.593652  115008 status.go:330] multinode-938028-m03 host status = "Stopped" (err=<nil>)
	I0817 21:31:05.593674  115008 status.go:343] host is not running, skipping remaining checks
	I0817 21:31:05.593679  115008 status.go:257] multinode-938028-m03 status: &{Name:multinode-938028-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.07s)
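Note: the stderr trace above shows how the status command probes the apiserver: it locates the process's freezer cgroup via /proc/<pid>/cgroup and reads the matching freezer.state, expecting "THAWED". A rough standalone sketch of that probe, assuming cgroup v1 paths as seen in this run (the PID and the helper name are illustrative, not minikube's code):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// freezerState parses the freezer entry of /proc/<pid>/cgroup (for example
// "11:freezer:/docker/.../crio-...") and reads the matching freezer.state file.
func freezerState(pid int) (string, error) {
	data, err := os.ReadFile(fmt.Sprintf("/proc/%d/cgroup", pid))
	if err != nil {
		return "", err
	}
	for _, line := range strings.Split(string(data), "\n") {
		parts := strings.SplitN(line, ":", 3) // hierarchy-ID : controller : path
		if len(parts) == 3 && parts[1] == "freezer" {
			state, err := os.ReadFile(filepath.Join("/sys/fs/cgroup/freezer", parts[2], "freezer.state"))
			if err != nil {
				return "", err
			}
			return strings.TrimSpace(string(state)), nil // e.g. "THAWED"
		}
	}
	return "", fmt.Errorf("no freezer cgroup found for pid %d", pid)
}

func main() {
	state, err := freezerState(1425) // PID taken from the log above; illustrative only
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(state)
}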
TestMultiNode/serial/StartAfterStop (11.12s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-938028 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-938028 node start m03 --alsologtostderr: (10.467644946s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-938028 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (11.12s)

TestMultiNode/serial/RestartKeepsNodes (116.02s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-938028
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-938028
E0817 21:31:31.781453   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/client.crt: no such file or directory
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-938028: (24.82947985s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-938028 --wait=true -v=8 --alsologtostderr
E0817 21:31:59.465748   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/client.crt: no such file or directory
E0817 21:32:53.379764   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-938028 --wait=true -v=8 --alsologtostderr: (1m31.107878817s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-938028
--- PASS: TestMultiNode/serial/RestartKeepsNodes (116.02s)

TestMultiNode/serial/DeleteNode (4.58s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-938028 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-938028 node delete m03: (4.029240374s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-938028 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.58s)
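Note: the `kubectl get nodes -o go-template` invocation above walks every node's status conditions and prints the Ready condition's status. A minimal sketch of how that template evaluates, using Go's text/template over a hypothetical NodeList-shaped JSON sample:

package main

import (
	"encoding/json"
	"os"
	"text/template"
)

// sample is a hypothetical, minimal NodeList: one node whose Ready condition is True.
const sample = `{"items":[{"status":{"conditions":[{"type":"Ready","status":"True"}]}}]}`

func main() {
	var nodes interface{}
	if err := json.Unmarshal([]byte(sample), &nodes); err != nil {
		panic(err)
	}
	// The template string from the test, minus the shell quoting.
	tmpl := template.Must(template.New("ready").Parse(
		`{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`))
	if err := tmpl.Execute(os.Stdout, nodes); err != nil { // prints: " True"
		panic(err)
	}
}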
TestMultiNode/serial/StopMultiNode (23.75s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-938028 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-938028 stop: (23.593857691s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-938028 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-938028 status: exit status 7 (78.448267ms)
-- stdout --
	multinode-938028
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-938028-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-938028 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-938028 status --alsologtostderr: exit status 7 (75.277204ms)
-- stdout --
	multinode-938028
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-938028-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0817 21:33:41.018675  125197 out.go:296] Setting OutFile to fd 1 ...
	I0817 21:33:41.018821  125197 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:33:41.018831  125197 out.go:309] Setting ErrFile to fd 2...
	I0817 21:33:41.018836  125197 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:33:41.019036  125197 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-10716/.minikube/bin
	I0817 21:33:41.019206  125197 out.go:303] Setting JSON to false
	I0817 21:33:41.019236  125197 mustload.go:65] Loading cluster: multinode-938028
	I0817 21:33:41.019331  125197 notify.go:220] Checking for updates...
	I0817 21:33:41.019616  125197 config.go:182] Loaded profile config "multinode-938028": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 21:33:41.019631  125197 status.go:255] checking status of multinode-938028 ...
	I0817 21:33:41.020129  125197 cli_runner.go:164] Run: docker container inspect multinode-938028 --format={{.State.Status}}
	I0817 21:33:41.037219  125197 status.go:330] multinode-938028 host status = "Stopped" (err=<nil>)
	I0817 21:33:41.037243  125197 status.go:343] host is not running, skipping remaining checks
	I0817 21:33:41.037250  125197 status.go:257] multinode-938028 status: &{Name:multinode-938028 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0817 21:33:41.037278  125197 status.go:255] checking status of multinode-938028-m02 ...
	I0817 21:33:41.037601  125197 cli_runner.go:164] Run: docker container inspect multinode-938028-m02 --format={{.State.Status}}
	I0817 21:33:41.053333  125197 status.go:330] multinode-938028-m02 host status = "Stopped" (err=<nil>)
	I0817 21:33:41.053353  125197 status.go:343] host is not running, skipping remaining checks
	I0817 21:33:41.053359  125197 status.go:257] multinode-938028-m02 status: &{Name:multinode-938028-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.75s)

TestMultiNode/serial/RestartMultiNode (72.04s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-938028 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0817 21:34:10.366812   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/functional-702251/client.crt: no such file or directory
E0817 21:34:16.424661   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-938028 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m11.485128939s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-938028 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (72.04s)

TestMultiNode/serial/ValidateNameConflict (23.2s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-938028
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-938028-m02 --driver=docker  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-938028-m02 --driver=docker  --container-runtime=crio: exit status 14 (59.746793ms)
-- stdout --
	* [multinode-938028-m02] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16865
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16865-10716/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-10716/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-938028-m02' is duplicated with machine name 'multinode-938028-m02' in profile 'multinode-938028'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-938028-m03 --driver=docker  --container-runtime=crio
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-938028-m03 --driver=docker  --container-runtime=crio: (21.024145639s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-938028
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-938028: exit status 80 (251.369824ms)

-- stdout --
	* Adding node m03 to cluster multinode-938028
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-938028-m03 already exists in multinode-938028-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-938028-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-938028-m03: (1.81933542s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (23.20s)
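Both rejections above are the point of the test: the multinode profile multinode-938028 already owns machines named multinode-938028-m02 and multinode-938028-m03, so the first name cannot be reused as a standalone profile and node add cannot mint the second. A quick way to see which profile and machine names are already taken (a sketch; the exact fields vary by minikube release) is:

	out/minikube-linux-amd64 profile list --output=json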

TestPreload (157.77s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-559331 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0817 21:36:31.781726   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-559331 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m19.529098632s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-559331 image pull gcr.io/k8s-minikube/busybox
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-559331
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-559331: (5.694426084s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-559331 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0817 21:37:53.380565   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-559331 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m9.351693822s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-559331 image list
helpers_test.go:175: Cleaning up "test-preload-559331" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-559331
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-559331: (2.235633332s)
--- PASS: TestPreload (157.77s)
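The flow here is: start with --preload=false, pull gcr.io/k8s-minikube/busybox by hand, stop, restart with preloads enabled, and confirm via image list that the manually pulled image survived the restart. The final check can be reproduced manually (illustrative; the grep target is just the image pulled above):

	out/minikube-linux-amd64 -p test-preload-559331 image list | grep busybox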

TestScheduledStopUnix (100.04s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-030561 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-030561 --memory=2048 --driver=docker  --container-runtime=crio: (24.681724254s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-030561 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-030561 -n scheduled-stop-030561
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-030561 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-030561 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-030561 -n scheduled-stop-030561
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-030561
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-030561 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0817 21:39:10.366702   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/functional-702251/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-030561
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-030561: exit status 7 (57.90437ms)

-- stdout --
	scheduled-stop-030561
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-030561 -n scheduled-stop-030561
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-030561 -n scheduled-stop-030561: exit status 7 (56.69414ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-030561" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-030561
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-030561: (4.179166582s)
--- PASS: TestScheduledStopUnix (100.04s)
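Condensed, the scheduled-stop workflow the test drives is (same flags as above):

	out/minikube-linux-amd64 stop -p scheduled-stop-030561 --schedule 5m        # arm a stop five minutes out
	out/minikube-linux-amd64 stop -p scheduled-stop-030561 --cancel-scheduled   # disarm it
	out/minikube-linux-amd64 stop -p scheduled-stop-030561 --schedule 15s       # arm a short timer and let it fire
	out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-030561

Once the timer fires, status exits 7 and reports the host as Stopped, which is exactly what the final assertions accept.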

TestInsufficientStorage (10.07s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-616640 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-616640 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.775131078s)

-- stdout --
	{"specversion":"1.0","id":"ee9ea00a-3678-4a21-8abc-3dd7ad07605f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-616640] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"611bd57a-925e-44a4-be70-476de37d4283","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16865"}}
	{"specversion":"1.0","id":"b6c663be-1661-4f40-a631-086cc2b209dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2401d43c-1e4d-4e00-bcb0-41a8bfbb19ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/16865-10716/kubeconfig"}}
	{"specversion":"1.0","id":"1546135f-ad22-45af-bab1-14580c964465","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-10716/.minikube"}}
	{"specversion":"1.0","id":"d584615a-f63c-4e2d-acb4-f75680024798","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"905078ae-1f5b-4acb-bf9a-30cde35ece52","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"466c0631-7026-4a05-af56-1e9db691fd87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"886bfd33-2984-440c-a190-82b2ecf7904c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"47de331d-34d3-485b-a599-a3b7900c3889","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"dbd2facf-8047-4bc6-a550-880ed5e96fe6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"32d36946-f1fe-4cca-952e-fe94dfac9b0b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-616640 in cluster insufficient-storage-616640","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"7efd3104-43af-4b5e-9910-705185ae7e59","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"9de17e28-c18e-4789-b1a4-d786658d086a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"55ad2956-41e6-4a18-aa52-6a7a0899279d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-616640 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-616640 --output=json --layout=cluster: exit status 7 (252.111609ms)

-- stdout --
	{"Name":"insufficient-storage-616640","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-616640","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0817 21:39:47.528101  146894 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-616640" does not appear in /home/jenkins/minikube-integration/16865-10716/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-616640 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-616640 --output=json --layout=cluster: exit status 7 (245.740916ms)

-- stdout --
	{"Name":"insufficient-storage-616640","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-616640","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0817 21:39:47.774492  146982 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-616640" does not appear in /home/jenkins/minikube-integration/16865-10716/kubeconfig
	E0817 21:39:47.783601  146982 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/insufficient-storage-616640/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-616640" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-616640
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-616640: (1.799469849s)
--- PASS: TestInsufficientStorage (10.07s)
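exit status 26 is the RSRC_DOCKER_STORAGE error; the test forces it with the fake MINIKUBE_TEST_STORAGE_CAPACITY / MINIKUBE_TEST_AVAILABLE_STORAGE values visible in the event stream above, so that /var reports as full. The remediation embedded in the error event boils down to:

	docker system prune -a                  # reclaim unused Docker data on the host
	minikube ssh -- docker system prune     # same, inside the node (Docker runtime only)

and, per the message, '--force' can be passed to minikube start to skip the check entirely.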

TestKubernetesUpgrade (361.69s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-109707 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-109707 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (44.094741689s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-109707
version_upgrade_test.go:239: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-109707: (1.219469836s)
version_upgrade_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-109707 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-109707 status --format={{.Host}}: exit status 7 (88.307622ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-109707 --memory=2200 --kubernetes-version=v1.28.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:255: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-109707 --memory=2200 --kubernetes-version=v1.28.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m30.202599707s)
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-109707 version --output=json
version_upgrade_test.go:279: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:281: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-109707 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:281: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-109707 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (86.8666ms)

-- stdout --
	* [kubernetes-upgrade-109707] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16865
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16865-10716/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-10716/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.0-rc.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-109707
	    minikube start -p kubernetes-upgrade-109707 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1097072 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.0-rc.1, by running:
	    
	    minikube start -p kubernetes-upgrade-109707 --kubernetes-version=v1.28.0-rc.1
	    

** /stderr **
version_upgrade_test.go:285: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:287: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-109707 --memory=2200 --kubernetes-version=v1.28.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:287: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-109707 --memory=2200 --kubernetes-version=v1.28.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (43.82409974s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-109707" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-109707
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-109707: (2.105965576s)
--- PASS: TestKubernetesUpgrade (361.69s)
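The supported direction is exercised first: stop the cluster, then start it again with a newer --kubernetes-version; the later downgrade attempt is refused with exit status 106 and the recovery options shown in the stderr block. Condensed, the upgrade step is just:

	out/minikube-linux-amd64 stop -p kubernetes-upgrade-109707
	out/minikube-linux-amd64 start -p kubernetes-upgrade-109707 --memory=2200 --kubernetes-version=v1.28.0-rc.1 --driver=docker --container-runtime=crio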

TestMissingContainerUpgrade (154.12s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:321: (dbg) Run:  /tmp/minikube-v1.9.0.857730451.exe start -p missing-upgrade-175857 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:321: (dbg) Done: /tmp/minikube-v1.9.0.857730451.exe start -p missing-upgrade-175857 --memory=2200 --driver=docker  --container-runtime=crio: (1m26.38117058s)
version_upgrade_test.go:330: (dbg) Run:  docker stop missing-upgrade-175857
version_upgrade_test.go:335: (dbg) Run:  docker rm missing-upgrade-175857
version_upgrade_test.go:341: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-175857 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:341: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-175857 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m4.449183059s)
helpers_test.go:175: Cleaning up "missing-upgrade-175857" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-175857
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-175857: (2.051210134s)
--- PASS: TestMissingContainerUpgrade (154.12s)
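The missing-container scenario is manufactured by deleting the node container behind minikube's back and then asking the newer binary to start the same profile; the commands, taken from the steps above, are:

	/tmp/minikube-v1.9.0.857730451.exe start -p missing-upgrade-175857 --memory=2200 --driver=docker --container-runtime=crio
	docker stop missing-upgrade-175857 && docker rm missing-upgrade-175857
	out/minikube-linux-amd64 start -p missing-upgrade-175857 --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=crio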

TestStoppedBinaryUpgrade/Setup (0.71s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.71s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-162837 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-162837 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (67.665691ms)

-- stdout --
	* [NoKubernetes-162837] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16865
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16865-10716/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-10716/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
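exit status 14 (MK_USAGE) is the expected rejection of the contradictory flag pair. Following the stderr hint, either clear any globally configured version or drop the flag (a sketch of both fixes; the second command is the one the next subtest actually runs):

	minikube config unset kubernetes-version
	out/minikube-linux-amd64 start -p NoKubernetes-162837 --no-kubernetes --driver=docker --container-runtime=crio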

TestNoKubernetes/serial/StartWithK8s (33.84s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-162837 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-162837 --driver=docker  --container-runtime=crio: (33.520859599s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-162837 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (33.84s)

TestNoKubernetes/serial/StartWithStopK8s (9.04s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-162837 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-162837 --no-kubernetes --driver=docker  --container-runtime=crio: (6.744741774s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-162837 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-162837 status -o json: exit status 2 (292.88271ms)

-- stdout --
	{"Name":"NoKubernetes-162837","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-162837
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-162837: (2.005381548s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (9.04s)

TestNoKubernetes/serial/Start (9.7s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-162837 --no-kubernetes --driver=docker  --container-runtime=crio
E0817 21:40:33.414329   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/functional-702251/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-162837 --no-kubernetes --driver=docker  --container-runtime=crio: (9.704601781s)
--- PASS: TestNoKubernetes/serial/Start (9.70s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.3s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-162837 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-162837 "sudo systemctl is-active --quiet service kubelet": exit status 1 (301.647842ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.30s)
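The non-zero exit is the passing outcome: systemctl is-active exits 3 for an inactive unit, and minikube ssh surfaces that as "Process exited with status 3", so a failing exit code here proves the kubelet is not running. The same check on any host (illustrative):

	systemctl is-active kubelet; echo "exit=$?"   # prints 'inactive' and exit=3 when stopped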

TestNoKubernetes/serial/ProfileList (1.37s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.37s)

TestNoKubernetes/serial/Stop (1.75s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-162837
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-162837: (1.747273078s)
--- PASS: TestNoKubernetes/serial/Stop (1.75s)

TestNoKubernetes/serial/StartNoArgs (8.09s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-162837 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-162837 --driver=docker  --container-runtime=crio: (8.089115847s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.09s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-162837 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-162837 "sudo systemctl is-active --quiet service kubelet": exit status 1 (272.873994ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

TestPause/serial/Start (73.36s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-762508 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-762508 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m13.358518701s)
--- PASS: TestPause/serial/Start (73.36s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.56s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-165125
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.56s)

TestNetworkPlugins/group/false (3.71s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-405473 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-405473 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (162.596781ms)

-- stdout --
	* [false-405473] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16865
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16865-10716/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-10716/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0817 21:42:27.354653  190867 out.go:296] Setting OutFile to fd 1 ...
	I0817 21:42:27.354801  190867 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:42:27.354811  190867 out.go:309] Setting ErrFile to fd 2...
	I0817 21:42:27.354815  190867 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0817 21:42:27.355029  190867 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16865-10716/.minikube/bin
	I0817 21:42:27.355640  190867 out.go:303] Setting JSON to false
	I0817 21:42:27.357247  190867 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5096,"bootTime":1692303452,"procs":779,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0817 21:42:27.357317  190867 start.go:138] virtualization: kvm guest
	I0817 21:42:27.360837  190867 out.go:177] * [false-405473] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0817 21:42:27.362394  190867 out.go:177]   - MINIKUBE_LOCATION=16865
	I0817 21:42:27.364004  190867 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0817 21:42:27.362416  190867 notify.go:220] Checking for updates...
	I0817 21:42:27.365541  190867 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16865-10716/kubeconfig
	I0817 21:42:27.367079  190867 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16865-10716/.minikube
	I0817 21:42:27.368846  190867 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0817 21:42:27.370423  190867 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0817 21:42:27.373422  190867 config.go:182] Loaded profile config "force-systemd-env-592923": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 21:42:27.373547  190867 config.go:182] Loaded profile config "kubernetes-upgrade-109707": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0-rc.1
	I0817 21:42:27.373695  190867 config.go:182] Loaded profile config "pause-762508": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0817 21:42:27.373795  190867 driver.go:373] Setting default libvirt URI to qemu:///system
	I0817 21:42:27.398947  190867 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0817 21:42:27.399066  190867 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0817 21:42:27.463823  190867 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:66 SystemTime:2023-08-17 21:42:27.45288403 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0817 21:42:27.463989  190867 docker.go:294] overlay module found
	I0817 21:42:27.465822  190867 out.go:177] * Using the docker driver based on user configuration
	I0817 21:42:27.467417  190867 start.go:298] selected driver: docker
	I0817 21:42:27.467441  190867 start.go:902] validating driver "docker" against <nil>
	I0817 21:42:27.467457  190867 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0817 21:42:27.469833  190867 out.go:177] 
	W0817 21:42:27.471389  190867 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0817 21:42:27.473029  190867 out.go:177] 

** /stderr **
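The MK_USAGE failure is deliberate: --cni=false disables CNI entirely, and the crio runtime cannot run without one. A start that satisfies the requirement names a concrete CNI instead (a sketch; bridge is one of minikube's built-in --cni values):

	out/minikube-linux-amd64 start -p false-405473 --memory=2048 --cni=bridge --driver=docker --container-runtime=crio
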
net_test.go:88: 
----------------------- debugLogs start: false-405473 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-405473

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-405473

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-405473

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-405473

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-405473

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-405473

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-405473

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-405473

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-405473

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-405473

>>> host: /etc/nsswitch.conf:
* Profile "false-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405473"

>>> host: /etc/hosts:
* Profile "false-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405473"

>>> host: /etc/resolv.conf:
* Profile "false-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405473"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-405473

>>> host: crictl pods:
* Profile "false-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405473"

>>> host: crictl containers:
* Profile "false-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405473"

>>> k8s: describe netcat deployment:
error: context "false-405473" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-405473" does not exist

>>> k8s: netcat logs:
error: context "false-405473" does not exist

>>> k8s: describe coredns deployment:
error: context "false-405473" does not exist

>>> k8s: describe coredns pods:
error: context "false-405473" does not exist

>>> k8s: coredns logs:
error: context "false-405473" does not exist

>>> k8s: describe api server pod(s):
error: context "false-405473" does not exist

>>> k8s: api server logs:
error: context "false-405473" does not exist

>>> host: /etc/cni:
* Profile "false-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405473"

>>> host: ip a s:
* Profile "false-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405473"

>>> host: ip r s:
* Profile "false-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405473"

>>> host: iptables-save:
* Profile "false-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405473"

>>> host: iptables table nat:
* Profile "false-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405473"

>>> k8s: describe kube-proxy daemon set:
error: context "false-405473" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-405473" does not exist

>>> k8s: kube-proxy logs:
error: context "false-405473" does not exist

>>> host: kubelet daemon status:
* Profile "false-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405473"

>>> host: kubelet daemon config:
* Profile "false-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405473"

>>> k8s: kubelet logs:
* Profile "false-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405473"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405473"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405473"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/16865-10716/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 17 Aug 2023 21:41:54 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.67.2:8443
  name: pause-762508
contexts:
- context:
    cluster: pause-762508
    extensions:
    - extension:
        last-update: Thu, 17 Aug 2023 21:41:54 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: pause-762508
  name: pause-762508
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-762508
  user:
    client-certificate: /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/pause-762508/client.crt
    client-key: /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/pause-762508/client.key
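
This kubeconfig explains the surrounding debug output: its only cluster, context, and user entries belong to pause-762508, current-context is empty, and no false-405473 context exists because the start above bailed out before provisioning anything. Selecting the one context that does exist (illustrative) would be:

	kubectl config use-context pause-762508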

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-405473

>>> host: docker daemon status:
* Profile "false-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405473"

>>> host: docker daemon config:
* Profile "false-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405473"

>>> host: /etc/docker/daemon.json:
* Profile "false-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405473"

>>> host: docker system info:
* Profile "false-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405473"

>>> host: cri-docker daemon status:
* Profile "false-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405473"

>>> host: cri-docker daemon config:
* Profile "false-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405473"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405473"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405473"

>>> host: cri-dockerd version:
* Profile "false-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405473"

>>> host: containerd daemon status:
* Profile "false-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405473"

>>> host: containerd daemon config:
* Profile "false-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405473"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405473"

>>> host: /etc/containerd/config.toml:
* Profile "false-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405473"

>>> host: containerd config dump:
* Profile "false-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405473"

>>> host: crio daemon status:
* Profile "false-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405473"

>>> host: crio daemon config:
* Profile "false-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405473"

>>> host: /etc/crio:
* Profile "false-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405473"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-405473"

                                                
                                                
----------------------- debugLogs end: false-405473 [took: 3.215658722s] --------------------------------
helpers_test.go:175: Cleaning up "false-405473" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-405473
--- PASS: TestNetworkPlugins/group/false (3.71s)

x
+
TestPause/serial/SecondStartNoReconfiguration (27.89s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-762508 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-762508 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.868186928s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (27.89s)

x
+
TestPause/serial/Pause (0.87s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-762508 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.87s)

x
+
TestPause/serial/VerifyStatus (0.3s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-762508 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-762508 --output=json --layout=cluster: exit status 2 (302.084598ms)
-- stdout --
	{"Name":"pause-762508","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-762508","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.30s)
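The JSON above is what `minikube status --output=json --layout=cluster` emits: state is encoded in numeric codes (200 OK, 405 Stopped, 418 Paused), and a non-running cluster additionally surfaces as a non-zero exit status, which is why the test tolerates exit status 2. A minimal decoding sketch (field names copied from the dump above; only the fields shown there are modeled):

package main

import (
	"encoding/json"
	"fmt"
)

type componentState struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type clusterState struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string                    `json:"Name"`
		Components map[string]componentState `json:"Components"`
	} `json:"Nodes"`
}

func main() {
	// Trimmed copy of the status dump above.
	raw := []byte(`{"Name":"pause-762508","StatusCode":418,"StatusName":"Paused",
		"Nodes":[{"Name":"pause-762508","Components":{
		"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},
		"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`)
	var st clusterState
	if err := json.Unmarshal(raw, &st); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %d (%s)\n", st.Name, st.StatusCode, st.StatusName)
	for _, n := range st.Nodes {
		for _, c := range n.Components {
			fmt.Printf("  %s: %s\n", c.Name, c.StatusName)
		}
	}
}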

x
+
TestPause/serial/Unpause (0.68s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-762508 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.68s)

x
+
TestPause/serial/PauseAgain (0.78s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-762508 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.78s)

x
+
TestPause/serial/DeletePaused (2.66s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-762508 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-762508 --alsologtostderr -v=5: (2.662741284s)
--- PASS: TestPause/serial/DeletePaused (2.66s)

x
+
TestPause/serial/VerifyDeletedResources (14.93s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (14.878220055s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-762508
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-762508: exit status 1 (16.209835ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-762508: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (14.93s)
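VerifyDeletedResources asserts cleanup through the *failure* of `docker volume inspect`: once the profile is deleted, the daemon answers "no such volume" and the CLI exits 1 with an empty `[]` on stdout. A small sketch of the same check (standard library only, assuming a docker CLI on PATH):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// After `minikube delete -p pause-762508`, inspecting the profile's
	// volume should fail; a nil error here would mean cleanup leaked it.
	err := exec.Command("docker", "volume", "inspect", "pause-762508").Run()
	if err == nil {
		fmt.Println("volume still present: cleanup failed")
		return
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// The log above shows exit status 1 for a missing volume.
		fmt.Println("volume gone as expected; docker exited", ee.ExitCode())
	} else {
		panic(err) // docker binary missing or not runnable
	}
}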

x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (128.71s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-784624 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-784624 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m8.707837245s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (128.71s)

x
+
TestStartStop/group/no-preload/serial/FirstStart (73.48s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-803413 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0-rc.1
E0817 21:44:10.366844   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/functional-702251/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-803413 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0-rc.1: (1m13.481613068s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (73.48s)

x
+
TestStartStop/group/no-preload/serial/DeployApp (8.35s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-803413 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [78dfd434-cdfa-43fd-b42a-4e76086a0571] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [78dfd434-cdfa-43fd-b42a-4e76086a0571] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.01707956s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-803413 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.35s)

x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.86s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-803413 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-803413 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.86s)

x
+
TestStartStop/group/no-preload/serial/Stop (11.89s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-803413 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-803413 --alsologtostderr -v=3: (11.888294422s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.89s)

x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.15s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-803413 -n no-preload-803413
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-803413 -n no-preload-803413: exit status 7 (59.277604ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-803413 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.15s)
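The `(may be ok)` note reflects how `minikube status` reports machine state through its exit code as well as stdout: a stopped host prints `Stopped` and exits 7, as seen above, so a wrapper must not treat every non-zero exit as a hard failure. A sketch of reading both channels (standard library only; profile name taken from the log above):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "no-preload-803413")
	out, err := cmd.Output() // stdout still carries "Stopped" on exit 7
	fmt.Printf("host: %s\n", out)
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 7 {
		fmt.Println("exit 7: host stopped (may be ok, per the test)")
	} else if err != nil {
		panic(err)
	}
}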

x
+
TestStartStop/group/no-preload/serial/SecondStart (335.72s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-803413 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0-rc.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-803413 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0-rc.1: (5m35.420768968s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-803413 -n no-preload-803413
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (335.72s)

x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (8.46s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-784624 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [055cb6a5-a1a1-4e0a-8c56-443ba6e8d302] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [055cb6a5-a1a1-4e0a-8c56-443ba6e8d302] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.015259554s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-784624 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.46s)

x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.77s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-784624 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-784624 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.77s)

x
+
TestStartStop/group/old-k8s-version/serial/Stop (11.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-784624 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-784624 --alsologtostderr -v=3: (11.924662872s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.92s)

x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-784624 -n old-k8s-version-784624
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-784624 -n old-k8s-version-784624: exit status 7 (60.990391ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-784624 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)

x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (418.78s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-784624 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-784624 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (6m58.4851737s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-784624 -n old-k8s-version-784624
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (418.78s)

x
+
TestStartStop/group/embed-certs/serial/FirstStart (68.84s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-639520 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.4
E0817 21:46:31.781964   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-639520 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.4: (1m8.835279193s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (68.84s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (37.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-136615 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-136615 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.4: (37.221579515s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (37.22s)

x
+
TestStartStop/group/embed-certs/serial/DeployApp (8.72s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-639520 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [89662d68-297e-4d27-9b2e-9f003aece90b] Pending
helpers_test.go:344: "busybox" [89662d68-297e-4d27-9b2e-9f003aece90b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [89662d68-297e-4d27-9b2e-9f003aece90b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.097406905s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-639520 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.72s)

x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.08s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-639520 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-639520 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.006380827s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-639520 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.08s)

x
+
TestStartStop/group/embed-certs/serial/Stop (11.96s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-639520 --alsologtostderr -v=3
E0817 21:47:53.380317   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-639520 --alsologtostderr -v=3: (11.960722733s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.96s)

x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.16s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-639520 -n embed-certs-639520
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-639520 -n embed-certs-639520: exit status 7 (63.248886ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-639520 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.16s)

x
+
TestStartStop/group/embed-certs/serial/SecondStart (336.04s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-639520 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-639520 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.4: (5m35.644994932s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-639520 -n embed-certs-639520
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (336.04s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-136615 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0025d75e-b2cf-4b4e-aa1b-6f6fad76e714] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0025d75e-b2cf-4b4e-aa1b-6f6fad76e714] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 7.015248305s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-136615 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.39s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.96s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-136615 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-136615 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.96s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-136615 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-136615 --alsologtostderr -v=3: (11.945981012s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.95s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-136615 -n default-k8s-diff-port-136615
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-136615 -n default-k8s-diff-port-136615: exit status 7 (88.630892ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-136615 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (347.56s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-136615 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.4
E0817 21:49:10.367149   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/functional-702251/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-136615 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.4: (5m47.123461245s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-136615 -n default-k8s-diff-port-136615
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (347.56s)

x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-kvbjz" [d9124376-dfb6-40a3-a40f-388832f8cc34] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-kvbjz" [d9124376-dfb6-40a3-a40f-388832f8cc34] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.016296979s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.02s)

x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-kvbjz" [d9124376-dfb6-40a3-a40f-388832f8cc34] Running
E0817 21:50:56.425075   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008791947s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-803413 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-803413 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)
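VerifyKubernetesImages lists what the runtime holds via `sudo crictl images -o json` and calls out images that are not part of minikube's expected set (here kindnetd and the busybox test image). A decoding sketch follows; the JSON shape (an `images` array with `repoTags`) matches crictl's JSON output, but the filter below is an illustrative heuristic, not the test's actual allow-list:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// Minimal shape of `crictl images -o json`; only repoTags is needed to
// spot non-minikube images the way the check above does.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			// Illustrative heuristic: treat anything outside
			// registry.k8s.io as "non-minikube", the way the log
			// flags kindest/* and gcr.io/k8s-minikube/busybox.
			if !strings.HasPrefix(tag, "registry.k8s.io/") {
				fmt.Println("found non-minikube image:", tag)
			}
		}
	}
}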

x
+
TestStartStop/group/no-preload/serial/Pause (2.65s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-803413 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-803413 -n no-preload-803413
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-803413 -n no-preload-803413: exit status 2 (278.973599ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-803413 -n no-preload-803413
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-803413 -n no-preload-803413: exit status 2 (281.690169ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-803413 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-803413 -n no-preload-803413
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-803413 -n no-preload-803413
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.65s)

x
+
TestStartStop/group/newest-cni/serial/FirstStart (38.2s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-734671 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0-rc.1
E0817 21:51:31.781470   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/ingress-addon-legacy-997484/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-734671 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0-rc.1: (38.195116684s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (38.20s)

x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.86s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-734671 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.86s)

x
+
TestStartStop/group/newest-cni/serial/Stop (1.22s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-734671 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-734671 --alsologtostderr -v=3: (1.215606429s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.22s)

x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-734671 -n newest-cni-734671
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-734671 -n newest-cni-734671: exit status 7 (66.508392ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-734671 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

x
+
TestStartStop/group/newest-cni/serial/SecondStart (26.31s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-734671 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0-rc.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-734671 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0-rc.1: (26.022376092s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-734671 -n newest-cni-734671
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (26.31s)

x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-734671 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

x
+
TestStartStop/group/newest-cni/serial/Pause (2.48s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-734671 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-734671 -n newest-cni-734671
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-734671 -n newest-cni-734671: exit status 2 (325.363284ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-734671 -n newest-cni-734671
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-734671 -n newest-cni-734671: exit status 2 (282.219292ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-734671 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-734671 -n newest-cni-734671
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-734671 -n newest-cni-734671
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.48s)

x
+
TestNetworkPlugins/group/auto/Start (69.07s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-405473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-405473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m9.067236454s)
--- PASS: TestNetworkPlugins/group/auto/Start (69.07s)

x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-lbw2w" [93ac2995-4bdb-4ff0-9a4a-14b4bed36d00] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.014121201s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-lbw2w" [93ac2995-4bdb-4ff0-9a4a-14b4bed36d00] Running
E0817 21:52:53.380166   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/addons-418182/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008460144s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-784624 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-784624 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

x
+
TestStartStop/group/old-k8s-version/serial/Pause (2.58s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-784624 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-784624 -n old-k8s-version-784624
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-784624 -n old-k8s-version-784624: exit status 2 (285.72591ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-784624 -n old-k8s-version-784624
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-784624 -n old-k8s-version-784624: exit status 2 (277.289323ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-784624 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-784624 -n old-k8s-version-784624
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-784624 -n old-k8s-version-784624
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.58s)

x
+
TestNetworkPlugins/group/kindnet/Start (72.07s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-405473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-405473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m12.065351663s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (72.07s)

x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-405473 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

x
+
TestNetworkPlugins/group/auto/NetCatPod (9.32s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-405473 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-w4vln" [7dac24ce-726a-436d-b833-cc3716913175] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-w4vln" [7dac24ce-726a-436d-b833-cc3716913175] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.008694839s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.32s)

x
+
TestNetworkPlugins/group/auto/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-405473 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

x
+
TestNetworkPlugins/group/auto/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-405473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)
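The Localhost probe runs nc in zero-I/O mode: -z only checks that the port accepts a connection, -w 5 bounds the wait at five seconds, -i 5 spaces out the probes, and the exit status (propagated through kubectl exec) is what the test asserts on:
	kubectl --context auto-405473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"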

TestNetworkPlugins/group/auto/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-405473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
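HairPin is the same nc probe pointed at the pod's own service name (netcat) instead of localhost: the connection leaves the pod, hits the service ClusterIP, and is NATed straight back to the originating pod. That round trip only succeeds when the node bridge or kube-proxy handles hairpin traffic, which is the property being verified:
	kubectl --context auto-405473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"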

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (14.02s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-q4c6f" [16cf4444-2e73-4ab2-9289-5230a6060968] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-q4c6f" [16cf4444-2e73-4ab2-9289-5230a6060968] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.021496997s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (14.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-q4c6f" [16cf4444-2e73-4ab2-9289-5230a6060968] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.035737292s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-639520 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestNetworkPlugins/group/calico/Start (63.15s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-405473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-405473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m3.148973149s)
--- PASS: TestNetworkPlugins/group/calico/Start (63.15s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.4s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-639520 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.40s)
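VerifyKubernetesImages dumps the CRI-O image store as JSON and flags anything outside the expected minikube image set. A compact way to reproduce the listing, assuming jq is available on the host:
	out/minikube-linux-amd64 ssh -p embed-certs-639520 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'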

TestStartStop/group/embed-certs/serial/Pause (2.92s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-639520 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-639520 -n embed-certs-639520
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-639520 -n embed-certs-639520: exit status 2 (373.799673ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-639520 -n embed-certs-639520
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-639520 -n embed-certs-639520: exit status 2 (356.988686ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-639520 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-639520 -n embed-certs-639520
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-639520 -n embed-certs-639520
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.92s)
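The Pause flow above is: pause the profile, confirm the apiserver reports Paused and the kubelet reports Stopped (minikube status exits 2 whenever a component is not running, which the test records as "may be ok"), then unpause and confirm both checks come back clean. Condensed, as a by-hand sketch:
	out/minikube-linux-amd64 pause -p embed-certs-639520
	out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-639520    # Paused, exit status 2
	out/minikube-linux-amd64 unpause -p embed-certs-639520
	out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-639520    # should print Running, exit status 0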

TestNetworkPlugins/group/custom-flannel/Start (64.03s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-405473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0817 21:54:10.366428   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/functional-702251/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-405473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m4.02573465s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (64.03s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-wpwzf" [a1fba514-77bd-4606-8e8a-320a8f6839e1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.021273588s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)
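ControllerPod gates the connectivity subtests on the CNI's own daemon pod being up, using the same app=kindnet label in kube-system. A one-line equivalent:
	kubectl --context kindnet-405473 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m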

TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-405473 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.38s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-405473 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-k4w2t" [2250336e-dac1-429b-936b-b5220886392d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-k4w2t" [2250336e-dac1-429b-936b-b5220886392d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.010815734s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.38s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.02s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-c8ckk" [c779f815-83a3-4596-94c2-2fcd456b6251] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.021048165s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-c8ckk" [c779f815-83a3-4596-94c2-2fcd456b6251] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011338025s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-136615 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestNetworkPlugins/group/kindnet/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-405473 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

TestNetworkPlugins/group/kindnet/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-405473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

TestNetworkPlugins/group/kindnet/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-405473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.35s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-136615 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.35s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.81s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-136615 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-136615 -n default-k8s-diff-port-136615
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-136615 -n default-k8s-diff-port-136615: exit status 2 (308.237931ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-136615 -n default-k8s-diff-port-136615
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-136615 -n default-k8s-diff-port-136615: exit status 2 (285.848299ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-136615 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-136615 -n default-k8s-diff-port-136615
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-136615 -n default-k8s-diff-port-136615
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.81s)
E0817 21:55:46.881413   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/old-k8s-version-784624/client.crt: no such file or directory

TestNetworkPlugins/group/enable-default-cni/Start (80.16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-405473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E0817 21:54:44.218722   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/no-preload-803413/client.crt: no such file or directory
E0817 21:54:44.224066   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/no-preload-803413/client.crt: no such file or directory
E0817 21:54:44.234364   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/no-preload-803413/client.crt: no such file or directory
E0817 21:54:44.254696   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/no-preload-803413/client.crt: no such file or directory
E0817 21:54:44.295380   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/no-preload-803413/client.crt: no such file or directory
E0817 21:54:44.375716   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/no-preload-803413/client.crt: no such file or directory
E0817 21:54:44.536262   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/no-preload-803413/client.crt: no such file or directory
E0817 21:54:44.857173   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/no-preload-803413/client.crt: no such file or directory
E0817 21:54:45.498280   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/no-preload-803413/client.crt: no such file or directory
E0817 21:54:46.779119   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/no-preload-803413/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-405473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m20.160181261s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (80.16s)
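The interleaved E0817 cert_rotation lines are most likely noise from client-go's certificate watcher in the shared test process: it still references the client.crt of a profile that an earlier test already deleted (no-preload-803413), so they do not indicate a problem with this start, which passed.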

TestNetworkPlugins/group/flannel/Start (56.67s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-405473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0817 21:54:54.460315   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/no-preload-803413/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-405473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (56.674457992s)
--- PASS: TestNetworkPlugins/group/flannel/Start (56.67s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-nqz5p" [4018ebd1-b5cd-4dc0-99aa-33fbf45f1246] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.022439821s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/calico/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-405473 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

TestNetworkPlugins/group/calico/NetCatPod (10.41s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-405473 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-fjt4b" [4cd1eaea-bf8a-4882-94ad-51eb988a7e9a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0817 21:55:04.701240   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/no-preload-803413/client.crt: no such file or directory
helpers_test.go:344: "netcat-7458db8b8-fjt4b" [4cd1eaea-bf8a-4882-94ad-51eb988a7e9a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.008862608s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.41s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-405473 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.32s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-405473 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-wh4rs" [93a78953-41bc-4ca7-89a7-3bf55cef491f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-wh4rs" [93a78953-41bc-4ca7-89a7-3bf55cef491f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.008983839s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.32s)

TestNetworkPlugins/group/calico/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-405473 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

TestNetworkPlugins/group/calico/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-405473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

TestNetworkPlugins/group/calico/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-405473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

TestNetworkPlugins/group/custom-flannel/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-405473 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-405473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-405473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

TestNetworkPlugins/group/bridge/Start (34.55s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-405473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-405473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (34.552360275s)
--- PASS: TestNetworkPlugins/group/bridge/Start (34.55s)
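With --cni=bridge, minikube writes a static bridge CNI config on the node rather than deploying a CNI daemonset, which is consistent with this being the quickest Start in the group (34.55s versus roughly 56-80s for the daemonset-based CNIs). To inspect what was written, something like the following should work, assuming the standard CNI config directory:
	out/minikube-linux-amd64 ssh -p bridge-405473 "sudo cat /etc/cni/net.d/*.conflist"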

TestNetworkPlugins/group/flannel/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-4gt2m" [c19ff9c7-02a6-40ca-a012-67d1d40011c8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.017714046s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-405473 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

TestNetworkPlugins/group/flannel/NetCatPod (10.28s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-405473 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-tkscm" [f1856feb-fae4-410f-add5-713912a50de7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-tkscm" [f1856feb-fae4-410f-add5-713912a50de7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.010195675s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.28s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-405473 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.29s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-405473 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-mqcvr" [dfd387e9-03a4-420c-abe8-1a5e90eaad24] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-mqcvr" [dfd387e9-03a4-420c-abe8-1a5e90eaad24] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.008235967s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.29s)

TestNetworkPlugins/group/flannel/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-405473 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

TestNetworkPlugins/group/flannel/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-405473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

TestNetworkPlugins/group/flannel/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-405473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-405473 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

TestNetworkPlugins/group/bridge/NetCatPod (9.32s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-405473 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-zggn2" [be44009b-f631-4479-9902-0ca7d3575aea] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0817 21:56:07.361956   17504 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/old-k8s-version-784624/client.crt: no such file or directory
helpers_test.go:344: "netcat-7458db8b8-zggn2" [be44009b-f631-4479-9902-0ca7d3575aea] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.008830436s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.32s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-405473 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-405473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-405473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

TestNetworkPlugins/group/bridge/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-405473 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-405473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-405473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

Test skip (27/310)

TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.27.4/cached-images (0s)
=== RUN   TestDownloadOnly/v1.27.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.4/cached-images (0.00s)

TestDownloadOnly/v1.27.4/binaries (0s)
=== RUN   TestDownloadOnly/v1.27.4/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.4/binaries (0.00s)

TestDownloadOnly/v1.27.4/kubectl (0s)
=== RUN   TestDownloadOnly/v1.27.4/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.27.4/kubectl (0.00s)

TestDownloadOnly/v1.28.0-rc.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0-rc.1/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0-rc.1/cached-images (0.00s)

TestDownloadOnly/v1.28.0-rc.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0-rc.1/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0-rc.1/binaries (0.00s)

TestDownloadOnly/v1.28.0-rc.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0-rc.1/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0-rc.1/kubectl (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.14s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-416032" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-416032
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

TestNetworkPlugins/group/kubenet (3.61s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
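Because the test skips before any cluster is created, every probe in the debugLogs dump below fails with "context was not found" or "Profile ... not found"; given the [pass: true] marker, that is the expected outcome for a never-started profile, not a regression.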
panic.go:522: 
----------------------- debugLogs start: kubenet-405473 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-405473

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-405473

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-405473

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-405473

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-405473

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-405473

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-405473

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-405473

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-405473

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-405473

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405473"

>>> host: /etc/hosts:
* Profile "kubenet-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405473"

>>> host: /etc/resolv.conf:
* Profile "kubenet-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405473"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-405473

>>> host: crictl pods:
* Profile "kubenet-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405473"

>>> host: crictl containers:
* Profile "kubenet-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405473"

>>> k8s: describe netcat deployment:
error: context "kubenet-405473" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-405473" does not exist

>>> k8s: netcat logs:
error: context "kubenet-405473" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-405473" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-405473" does not exist

>>> k8s: coredns logs:
error: context "kubenet-405473" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-405473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-405473" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405473"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405473"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405473"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405473"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405473"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-405473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-405473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-405473" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405473"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405473"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405473"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405473"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405473"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/16865-10716/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 17 Aug 2023 21:41:54 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.67.2:8443
  name: pause-762508
contexts:
- context:
    cluster: pause-762508
    extensions:
    - extension:
        last-update: Thu, 17 Aug 2023 21:41:54 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: pause-762508
  name: pause-762508
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-762508
  user:
    client-certificate: /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/pause-762508/client.crt
    client-key: /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/pause-762508/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-405473

>>> host: docker daemon status:
* Profile "kubenet-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405473"

>>> host: docker daemon config:
* Profile "kubenet-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405473"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405473"

>>> host: docker system info:
* Profile "kubenet-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405473"

>>> host: cri-docker daemon status:
* Profile "kubenet-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405473"

>>> host: cri-docker daemon config:
* Profile "kubenet-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405473"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405473"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405473"

>>> host: cri-dockerd version:
* Profile "kubenet-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405473"

>>> host: containerd daemon status:
* Profile "kubenet-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405473"

>>> host: containerd daemon config:
* Profile "kubenet-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405473"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405473"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405473"

>>> host: containerd config dump:
* Profile "kubenet-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405473"

>>> host: crio daemon status:
* Profile "kubenet-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405473"

>>> host: crio daemon config:
* Profile "kubenet-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405473"

>>> host: /etc/crio:
* Profile "kubenet-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405473"

>>> host: crio config:
* Profile "kubenet-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-405473"

----------------------- debugLogs end: kubenet-405473 [took: 3.467878654s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-405473" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-405473
--- SKIP: TestNetworkPlugins/group/kubenet (3.61s)

                                                
                                    
TestNetworkPlugins/group/cilium (4.54s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-405473 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-405473

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-405473

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-405473

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-405473

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-405473

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-405473

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-405473

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-405473

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-405473

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-405473

>>> host: /etc/nsswitch.conf:
* Profile "cilium-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405473"

>>> host: /etc/hosts:
* Profile "cilium-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405473"

>>> host: /etc/resolv.conf:
* Profile "cilium-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405473"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-405473

>>> host: crictl pods:
* Profile "cilium-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405473"

>>> host: crictl containers:
* Profile "cilium-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405473"

>>> k8s: describe netcat deployment:
error: context "cilium-405473" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-405473" does not exist

>>> k8s: netcat logs:
error: context "cilium-405473" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-405473" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-405473" does not exist

>>> k8s: coredns logs:
error: context "cilium-405473" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-405473" does not exist

>>> k8s: api server logs:
error: context "cilium-405473" does not exist

>>> host: /etc/cni:
* Profile "cilium-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405473"

>>> host: ip a s:
* Profile "cilium-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405473"

>>> host: ip r s:
* Profile "cilium-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405473"

>>> host: iptables-save:
* Profile "cilium-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405473"

>>> host: iptables table nat:
* Profile "cilium-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405473"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-405473

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-405473

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-405473" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-405473" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-405473

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-405473

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-405473" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-405473" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-405473" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-405473" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-405473" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405473"

>>> host: kubelet daemon config:
* Profile "cilium-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405473"

>>> k8s: kubelet logs:
* Profile "cilium-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405473"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405473"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405473"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/16865-10716/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 17 Aug 2023 21:42:32 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.94.2:8443
  name: force-systemd-env-592923
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/16865-10716/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 17 Aug 2023 21:42:29 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-109707
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/16865-10716/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 17 Aug 2023 21:41:54 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.67.2:8443
  name: pause-762508
contexts:
- context:
    cluster: force-systemd-env-592923
    extensions:
    - extension:
        last-update: Thu, 17 Aug 2023 21:42:32 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: force-systemd-env-592923
  name: force-systemd-env-592923
- context:
    cluster: kubernetes-upgrade-109707
    user: kubernetes-upgrade-109707
  name: kubernetes-upgrade-109707
- context:
    cluster: pause-762508
    extensions:
    - extension:
        last-update: Thu, 17 Aug 2023 21:41:54 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: pause-762508
  name: pause-762508
current-context: force-systemd-env-592923
kind: Config
preferences: {}
users:
- name: force-systemd-env-592923
  user:
    client-certificate: /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/force-systemd-env-592923/client.crt
    client-key: /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/force-systemd-env-592923/client.key
- name: kubernetes-upgrade-109707
  user:
    client-certificate: /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/kubernetes-upgrade-109707/client.crt
    client-key: /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/kubernetes-upgrade-109707/client.key
- name: pause-762508
  user:
    client-certificate: /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/pause-762508/client.crt
    client-key: /home/jenkins/minikube-integration/16865-10716/.minikube/profiles/pause-762508/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-405473

>>> host: docker daemon status:
* Profile "cilium-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405473"

>>> host: docker daemon config:
* Profile "cilium-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405473"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405473"

>>> host: docker system info:
* Profile "cilium-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405473"

>>> host: cri-docker daemon status:
* Profile "cilium-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405473"

>>> host: cri-docker daemon config:
* Profile "cilium-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405473"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405473"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405473"

>>> host: cri-dockerd version:
* Profile "cilium-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405473"

>>> host: containerd daemon status:
* Profile "cilium-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405473"

>>> host: containerd daemon config:
* Profile "cilium-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405473"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405473"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405473"

>>> host: containerd config dump:
* Profile "cilium-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405473"

>>> host: crio daemon status:
* Profile "cilium-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405473"

>>> host: crio daemon config:
* Profile "cilium-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405473"

>>> host: /etc/crio:
* Profile "cilium-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405473"

>>> host: crio config:
* Profile "cilium-405473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-405473"

----------------------- debugLogs end: cilium-405473 [took: 4.224328318s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-405473" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-405473
--- SKIP: TestNetworkPlugins/group/cilium (4.54s)