Test Report: Docker_Linux_crio 16968

3b33420a0c9ae0948b181bc91d502671e4007a23:2023-07-31:30376

Failed tests (6/304)

| Order | Failed Test                                          | Duration (s) |
|-------|------------------------------------------------------|--------------|
| 25    | TestAddons/parallel/Ingress                          | 151.22       |
| 137   | TestFunctional/parallel/MountCmd/specific-port       | 12.30        |
| 154   | TestIngressAddonLegacy/serial/ValidateIngressAddons  | 179.73       |
| 204   | TestMultiNode/serial/PingHostFrom2Pods               | 3.06         |
| 225   | TestRunningBinaryUpgrade                             | 71.06        |
| 235   | TestStoppedBinaryUpgrade/Upgrade                     | 80.40        |
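
Note: to reproduce one of these failures outside CI, the failing test can be rerun from a minikube checkout against the same driver and runtime. The invocation below is a sketch: the -run pattern is standard go test usage, while the binary and minikube-start-args flag names are assumptions based on the integration harness in test/integration.

    go test -v ./test/integration -timeout 30m \
      -run "TestAddons/parallel/Ingress" \
      --binary=out/minikube-linux-amd64 \
      --minikube-start-args="--driver=docker --container-runtime=crio"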
TestAddons/parallel/Ingress (151.22s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-650980 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-650980 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-650980 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [b9d17dfc-86b3-4cf1-ac34-15fc62de0519] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [b9d17dfc-86b3-4cf1-ac34-15fc62de0519] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.009200269s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p addons-650980 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-650980 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.015208722s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
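
Note: exit status 28 is curl's timeout code (CURLE_OPERATION_TIMEDOUT), and the ssh runner appears to propagate the remote command's exit status, so the request above timed out rather than being refused outright. A manual re-check with an explicit client-side timeout would look like this sketch, reusing the profile name from the log:

    out/minikube-linux-amd64 -p addons-650980 ssh "curl -sv --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"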
addons_test.go:262: (dbg) Run:  kubectl --context addons-650980 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p addons-650980 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p addons-650980 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p addons-650980 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p addons-650980 addons disable ingress --alsologtostderr -v=1: (7.676777496s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-650980
helpers_test.go:235: (dbg) docker inspect addons-650980:

-- stdout --
	[
	    {
	        "Id": "8dadec99eb045b20e27b077e498b614d65fce87f61a30f3d4813a9e945398158",
	        "Created": "2023-07-31T10:56:10.488413587Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 17291,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-31T10:56:10.798274343Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6cc01e6091959400f260dc442708e7c71630b58dab1f7c344cb00926bd84950",
	        "ResolvConfPath": "/var/lib/docker/containers/8dadec99eb045b20e27b077e498b614d65fce87f61a30f3d4813a9e945398158/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8dadec99eb045b20e27b077e498b614d65fce87f61a30f3d4813a9e945398158/hostname",
	        "HostsPath": "/var/lib/docker/containers/8dadec99eb045b20e27b077e498b614d65fce87f61a30f3d4813a9e945398158/hosts",
	        "LogPath": "/var/lib/docker/containers/8dadec99eb045b20e27b077e498b614d65fce87f61a30f3d4813a9e945398158/8dadec99eb045b20e27b077e498b614d65fce87f61a30f3d4813a9e945398158-json.log",
	        "Name": "/addons-650980",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-650980:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-650980",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bfe79586e4f00a6d1ee322c45a1bbe3164088768799ace6e1b3b1ca59149b68b-init/diff:/var/lib/docker/overlay2/024d10bc12a315dda5382be7dcc437728fbe4eb773f76ea4124e9f17d757e8de/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bfe79586e4f00a6d1ee322c45a1bbe3164088768799ace6e1b3b1ca59149b68b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bfe79586e4f00a6d1ee322c45a1bbe3164088768799ace6e1b3b1ca59149b68b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bfe79586e4f00a6d1ee322c45a1bbe3164088768799ace6e1b3b1ca59149b68b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-650980",
	                "Source": "/var/lib/docker/volumes/addons-650980/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-650980",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-650980",
	                "name.minikube.sigs.k8s.io": "addons-650980",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ac7c6c6be7478d4190cb37443de08f19ef29f3cfacad0c97ede049369ffca6b0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/ac7c6c6be747",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-650980": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "8dadec99eb04",
	                        "addons-650980"
	                    ],
	                    "NetworkID": "7906dd548fb85b8ba8be62620501e774ed3df4825bd75825864dbd04273a3470",
	                    "EndpointID": "40a8ebf79a44e10a0ff8ce203ed0ddc04650b67ddd11cb7efb5cf3886050ec0a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
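
Note: the inspect dump above can be narrowed to a single field with a Go template rather than scanning the full JSON. For example, the same template the harness itself runs later in this log extracts the host port mapped to the container's SSH port:

    docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-650980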
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-650980 -n addons-650980
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-650980 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-650980 logs -n 25: (1.113873389s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-763731   | jenkins | v1.31.1 | 31 Jul 23 10:54 UTC |                     |
	|         | -p download-only-763731        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-763731   | jenkins | v1.31.1 | 31 Jul 23 10:54 UTC |                     |
	|         | -p download-only-763731        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3   |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | --all                          | minikube               | jenkins | v1.31.1 | 31 Jul 23 10:55 UTC | 31 Jul 23 10:55 UTC |
	| delete  | -p download-only-763731        | download-only-763731   | jenkins | v1.31.1 | 31 Jul 23 10:55 UTC | 31 Jul 23 10:55 UTC |
	| delete  | -p download-only-763731        | download-only-763731   | jenkins | v1.31.1 | 31 Jul 23 10:55 UTC | 31 Jul 23 10:55 UTC |
	| start   | --download-only -p             | download-docker-811246 | jenkins | v1.31.1 | 31 Jul 23 10:55 UTC |                     |
	|         | download-docker-811246         |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | -p download-docker-811246      | download-docker-811246 | jenkins | v1.31.1 | 31 Jul 23 10:55 UTC | 31 Jul 23 10:55 UTC |
	| start   | --download-only -p             | binary-mirror-049634   | jenkins | v1.31.1 | 31 Jul 23 10:55 UTC |                     |
	|         | binary-mirror-049634           |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --binary-mirror                |                        |         |         |                     |                     |
	|         | http://127.0.0.1:46319         |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-049634        | binary-mirror-049634   | jenkins | v1.31.1 | 31 Jul 23 10:55 UTC | 31 Jul 23 10:55 UTC |
	| start   | -p addons-650980               | addons-650980          | jenkins | v1.31.1 | 31 Jul 23 10:55 UTC | 31 Jul 23 10:57 UTC |
	|         | --wait=true --memory=4000      |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --addons=registry              |                        |         |         |                     |                     |
	|         | --addons=metrics-server        |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                        |         |         |                     |                     |
	|         | --addons=gcp-auth              |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --addons=ingress               |                        |         |         |                     |                     |
	|         | --addons=ingress-dns           |                        |         |         |                     |                     |
	|         | --addons=helm-tiller           |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-650980          | jenkins | v1.31.1 | 31 Jul 23 10:57 UTC | 31 Jul 23 10:57 UTC |
	|         | addons-650980                  |                        |         |         |                     |                     |
	| addons  | addons-650980 addons           | addons-650980          | jenkins | v1.31.1 | 31 Jul 23 10:57 UTC | 31 Jul 23 10:57 UTC |
	|         | disable metrics-server         |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | enable headlamp                | addons-650980          | jenkins | v1.31.1 | 31 Jul 23 10:57 UTC | 31 Jul 23 10:57 UTC |
	|         | -p addons-650980               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | addons-650980 addons disable   | addons-650980          | jenkins | v1.31.1 | 31 Jul 23 10:57 UTC | 31 Jul 23 10:57 UTC |
	|         | helm-tiller --alsologtostderr  |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| ip      | addons-650980 ip               | addons-650980          | jenkins | v1.31.1 | 31 Jul 23 10:58 UTC | 31 Jul 23 10:58 UTC |
	| addons  | addons-650980 addons disable   | addons-650980          | jenkins | v1.31.1 | 31 Jul 23 10:58 UTC | 31 Jul 23 10:58 UTC |
	|         | registry --alsologtostderr     |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p    | addons-650980          | jenkins | v1.31.1 | 31 Jul 23 10:58 UTC | 31 Jul 23 10:58 UTC |
	|         | addons-650980                  |                        |         |         |                     |                     |
	| ssh     | addons-650980 ssh curl -s      | addons-650980          | jenkins | v1.31.1 | 31 Jul 23 10:58 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:    |                        |         |         |                     |                     |
	|         | nginx.example.com'             |                        |         |         |                     |                     |
	| addons  | addons-650980 addons           | addons-650980          | jenkins | v1.31.1 | 31 Jul 23 10:59 UTC | 31 Jul 23 10:59 UTC |
	|         | disable csi-hostpath-driver    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | addons-650980 addons           | addons-650980          | jenkins | v1.31.1 | 31 Jul 23 10:59 UTC | 31 Jul 23 10:59 UTC |
	|         | disable volumesnapshots        |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| ip      | addons-650980 ip               | addons-650980          | jenkins | v1.31.1 | 31 Jul 23 11:00 UTC | 31 Jul 23 11:00 UTC |
	| addons  | addons-650980 addons disable   | addons-650980          | jenkins | v1.31.1 | 31 Jul 23 11:00 UTC | 31 Jul 23 11:00 UTC |
	|         | ingress-dns --alsologtostderr  |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | addons-650980 addons disable   | addons-650980          | jenkins | v1.31.1 | 31 Jul 23 11:00 UTC | 31 Jul 23 11:00 UTC |
	|         | ingress --alsologtostderr -v=1 |                        |         |         |                     |                     |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/31 10:55:50
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.20.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 10:55:50.230658   16633 out.go:296] Setting OutFile to fd 1 ...
	I0731 10:55:50.230822   16633 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 10:55:50.230831   16633 out.go:309] Setting ErrFile to fd 2...
	I0731 10:55:50.230836   16633 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 10:55:50.231018   16633 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16968-8855/.minikube/bin
	I0731 10:55:50.231658   16633 out.go:303] Setting JSON to false
	I0731 10:55:50.232484   16633 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2301,"bootTime":1690798649,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1038-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 10:55:50.232543   16633 start.go:138] virtualization: kvm guest
	I0731 10:55:50.234790   16633 out.go:177] * [addons-650980] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 10:55:50.236210   16633 notify.go:220] Checking for updates...
	I0731 10:55:50.236212   16633 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 10:55:50.238101   16633 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 10:55:50.239652   16633 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16968-8855/kubeconfig
	I0731 10:55:50.241215   16633 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16968-8855/.minikube
	I0731 10:55:50.242657   16633 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 10:55:50.244141   16633 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 10:55:50.245808   16633 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 10:55:50.266728   16633 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0731 10:55:50.266796   16633 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 10:55:50.313794   16633 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:41 SystemTime:2023-07-31 10:55:50.306273451 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1038-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0731 10:55:50.313916   16633 docker.go:294] overlay module found
	I0731 10:55:50.315563   16633 out.go:177] * Using the docker driver based on user configuration
	I0731 10:55:50.316900   16633 start.go:298] selected driver: docker
	I0731 10:55:50.316910   16633 start.go:898] validating driver "docker" against <nil>
	I0731 10:55:50.316919   16633 start.go:909] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 10:55:50.317611   16633 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 10:55:50.367260   16633 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:41 SystemTime:2023-07-31 10:55:50.359815697 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1038-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0731 10:55:50.367395   16633 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0731 10:55:50.367567   16633 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 10:55:50.369062   16633 out.go:177] * Using Docker driver with root privileges
	I0731 10:55:50.370498   16633 cni.go:84] Creating CNI manager for ""
	I0731 10:55:50.370513   16633 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0731 10:55:50.370526   16633 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 10:55:50.370537   16633 start_flags.go:319] config:
	{Name:addons-650980 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-650980 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 10:55:50.372094   16633 out.go:177] * Starting control plane node addons-650980 in cluster addons-650980
	I0731 10:55:50.373286   16633 cache.go:122] Beginning downloading kic base image for docker with crio
	I0731 10:55:50.374561   16633 out.go:177] * Pulling base image ...
	I0731 10:55:50.375790   16633 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0731 10:55:50.375818   16633 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16968-8855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0731 10:55:50.375824   16633 cache.go:57] Caching tarball of preloaded images
	I0731 10:55:50.375875   16633 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0731 10:55:50.375908   16633 preload.go:174] Found /home/jenkins/minikube-integration/16968-8855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 10:55:50.375918   16633 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0731 10:55:50.376236   16633 profile.go:148] Saving config to /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/config.json ...
	I0731 10:55:50.376259   16633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/config.json: {Name:mk4db39a41a660c1f00bb7b5795d420e86591f62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:55:50.390684   16633 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0731 10:55:50.390782   16633 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory
	I0731 10:55:50.390799   16633 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory, skipping pull
	I0731 10:55:50.390805   16633 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in cache, skipping pull
	I0731 10:55:50.390816   16633 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 as a tarball
	I0731 10:55:50.390826   16633 cache.go:163] Loading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 from local cache
	I0731 10:56:01.382041   16633 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 from cached tarball
	I0731 10:56:01.382071   16633 cache.go:195] Successfully downloaded all kic artifacts
	I0731 10:56:01.382105   16633 start.go:365] acquiring machines lock for addons-650980: {Name:mk4038e31ea6a35966e2a31c923da05d0d7ba2fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 10:56:01.382202   16633 start.go:369] acquired machines lock for "addons-650980" in 71.781µs
	I0731 10:56:01.382223   16633 start.go:93] Provisioning new machine with config: &{Name:addons-650980 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-650980 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 10:56:01.382305   16633 start.go:125] createHost starting for "" (driver="docker")
	I0731 10:56:01.384178   16633 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0731 10:56:01.384374   16633 start.go:159] libmachine.API.Create for "addons-650980" (driver="docker")
	I0731 10:56:01.384402   16633 client.go:168] LocalClient.Create starting
	I0731 10:56:01.384488   16633 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca.pem
	I0731 10:56:01.691223   16633 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/cert.pem
	I0731 10:56:01.751224   16633 cli_runner.go:164] Run: docker network inspect addons-650980 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0731 10:56:01.767192   16633 cli_runner.go:211] docker network inspect addons-650980 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0731 10:56:01.767278   16633 network_create.go:281] running [docker network inspect addons-650980] to gather additional debugging logs...
	I0731 10:56:01.767297   16633 cli_runner.go:164] Run: docker network inspect addons-650980
	W0731 10:56:01.781941   16633 cli_runner.go:211] docker network inspect addons-650980 returned with exit code 1
	I0731 10:56:01.781975   16633 network_create.go:284] error running [docker network inspect addons-650980]: docker network inspect addons-650980: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-650980 not found
	I0731 10:56:01.781987   16633 network_create.go:286] output of [docker network inspect addons-650980]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-650980 not found
	
	** /stderr **
	I0731 10:56:01.782046   16633 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 10:56:01.797709   16633 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0005bc7e0}
	I0731 10:56:01.797747   16633 network_create.go:123] attempt to create docker network addons-650980 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0731 10:56:01.797809   16633 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-650980 addons-650980
	I0731 10:56:01.848910   16633 network_create.go:107] docker network addons-650980 192.168.49.0/24 created
	I0731 10:56:01.848952   16633 kic.go:117] calculated static IP "192.168.49.2" for the "addons-650980" container
	I0731 10:56:01.849050   16633 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0731 10:56:01.864386   16633 cli_runner.go:164] Run: docker volume create addons-650980 --label name.minikube.sigs.k8s.io=addons-650980 --label created_by.minikube.sigs.k8s.io=true
	I0731 10:56:01.880519   16633 oci.go:103] Successfully created a docker volume addons-650980
	I0731 10:56:01.880597   16633 cli_runner.go:164] Run: docker run --rm --name addons-650980-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-650980 --entrypoint /usr/bin/test -v addons-650980:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0731 10:56:05.535979   16633 cli_runner.go:217] Completed: docker run --rm --name addons-650980-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-650980 --entrypoint /usr/bin/test -v addons-650980:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib: (3.655333271s)
	I0731 10:56:05.536009   16633 oci.go:107] Successfully prepared a docker volume addons-650980
	I0731 10:56:05.536024   16633 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0731 10:56:05.536044   16633 kic.go:190] Starting extracting preloaded images to volume ...
	I0731 10:56:05.536096   16633 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16968-8855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-650980:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0731 10:56:10.424165   16633 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16968-8855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-650980:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (4.887999953s)
	I0731 10:56:10.424194   16633 kic.go:199] duration metric: took 4.888148 seconds to extract preloaded images to volume
	W0731 10:56:10.424315   16633 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0731 10:56:10.424422   16633 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0731 10:56:10.474411   16633 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-650980 --name addons-650980 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-650980 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-650980 --network addons-650980 --ip 192.168.49.2 --volume addons-650980:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0731 10:56:10.806016   16633 cli_runner.go:164] Run: docker container inspect addons-650980 --format={{.State.Running}}
	I0731 10:56:10.823823   16633 cli_runner.go:164] Run: docker container inspect addons-650980 --format={{.State.Status}}
	I0731 10:56:10.842084   16633 cli_runner.go:164] Run: docker exec addons-650980 stat /var/lib/dpkg/alternatives/iptables
	I0731 10:56:10.909482   16633 oci.go:144] the created container "addons-650980" has a running status.
	I0731 10:56:10.909512   16633 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16968-8855/.minikube/machines/addons-650980/id_rsa...
	I0731 10:56:11.110395   16633 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16968-8855/.minikube/machines/addons-650980/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0731 10:56:11.130145   16633 cli_runner.go:164] Run: docker container inspect addons-650980 --format={{.State.Status}}
	I0731 10:56:11.151778   16633 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0731 10:56:11.151800   16633 kic_runner.go:114] Args: [docker exec --privileged addons-650980 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0731 10:56:11.268649   16633 cli_runner.go:164] Run: docker container inspect addons-650980 --format={{.State.Status}}
	I0731 10:56:11.290070   16633 machine.go:88] provisioning docker machine ...
	I0731 10:56:11.290111   16633 ubuntu.go:169] provisioning hostname "addons-650980"
	I0731 10:56:11.290177   16633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-650980
	I0731 10:56:11.309007   16633 main.go:141] libmachine: Using SSH client type: native
	I0731 10:56:11.309421   16633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eb00] 0x811ba0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0731 10:56:11.309436   16633 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-650980 && echo "addons-650980" | sudo tee /etc/hostname
	I0731 10:56:11.501229   16633 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-650980
	
	I0731 10:56:11.501303   16633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-650980
	I0731 10:56:11.518175   16633 main.go:141] libmachine: Using SSH client type: native
	I0731 10:56:11.518571   16633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eb00] 0x811ba0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0731 10:56:11.518597   16633 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-650980' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-650980/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-650980' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 10:56:11.643662   16633 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 10:56:11.643687   16633 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16968-8855/.minikube CaCertPath:/home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16968-8855/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16968-8855/.minikube}
	I0731 10:56:11.643711   16633 ubuntu.go:177] setting up certificates
	I0731 10:56:11.643722   16633 provision.go:83] configureAuth start
	I0731 10:56:11.643773   16633 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-650980
	I0731 10:56:11.659753   16633 provision.go:138] copyHostCerts
	I0731 10:56:11.659833   16633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16968-8855/.minikube/cert.pem (1123 bytes)
	I0731 10:56:11.659966   16633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16968-8855/.minikube/key.pem (1675 bytes)
	I0731 10:56:11.660026   16633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16968-8855/.minikube/ca.pem (1082 bytes)
	I0731 10:56:11.660069   16633 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16968-8855/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca-key.pem org=jenkins.addons-650980 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-650980]
	I0731 10:56:11.905252   16633 provision.go:172] copyRemoteCerts
	I0731 10:56:11.905301   16633 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 10:56:11.905336   16633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-650980
	I0731 10:56:11.921314   16633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/addons-650980/id_rsa Username:docker}
	I0731 10:56:12.011966   16633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 10:56:12.032679   16633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 10:56:12.052710   16633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0731 10:56:12.072132   16633 provision.go:86] duration metric: configureAuth took 428.399352ms
	I0731 10:56:12.072153   16633 ubuntu.go:193] setting minikube options for container-runtime
	I0731 10:56:12.072296   16633 config.go:182] Loaded profile config "addons-650980": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0731 10:56:12.072389   16633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-650980
	I0731 10:56:12.087924   16633 main.go:141] libmachine: Using SSH client type: native
	I0731 10:56:12.088335   16633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eb00] 0x811ba0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0731 10:56:12.088353   16633 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 10:56:12.295484   16633 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 10:56:12.295513   16633 machine.go:91] provisioned docker machine in 1.005418725s
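	On success, the drop-in written above can be confirmed from inside the node; a sketch using standard commands:
	  cat /etc/sysconfig/crio.minikube   # expect the CRIO_MINIKUBE_OPTIONS line shown in the output above
	  systemctl is-active crio           # expect "active" after the restart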
	I0731 10:56:12.295524   16633 client.go:171] LocalClient.Create took 10.911115572s
	I0731 10:56:12.295550   16633 start.go:167] duration metric: libmachine.API.Create for "addons-650980" took 10.911173491s
	I0731 10:56:12.295566   16633 start.go:300] post-start starting for "addons-650980" (driver="docker")
	I0731 10:56:12.295583   16633 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 10:56:12.295693   16633 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 10:56:12.295756   16633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-650980
	I0731 10:56:12.312521   16633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/addons-650980/id_rsa Username:docker}
	I0731 10:56:12.407980   16633 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 10:56:12.410847   16633 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0731 10:56:12.410888   16633 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0731 10:56:12.410903   16633 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0731 10:56:12.410910   16633 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0731 10:56:12.410920   16633 filesync.go:126] Scanning /home/jenkins/minikube-integration/16968-8855/.minikube/addons for local assets ...
	I0731 10:56:12.410990   16633 filesync.go:126] Scanning /home/jenkins/minikube-integration/16968-8855/.minikube/files for local assets ...
	I0731 10:56:12.411024   16633 start.go:303] post-start completed in 115.446839ms
	I0731 10:56:12.411302   16633 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-650980
	I0731 10:56:12.426549   16633 profile.go:148] Saving config to /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/config.json ...
	I0731 10:56:12.426773   16633 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 10:56:12.426809   16633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-650980
	I0731 10:56:12.441649   16633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/addons-650980/id_rsa Username:docker}
	I0731 10:56:12.528346   16633 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0731 10:56:12.532091   16633 start.go:128] duration metric: createHost completed in 11.14977362s
	I0731 10:56:12.532116   16633 start.go:83] releasing machines lock for "addons-650980", held for 11.149903498s
	I0731 10:56:12.532165   16633 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-650980
	I0731 10:56:12.547138   16633 ssh_runner.go:195] Run: cat /version.json
	I0731 10:56:12.547178   16633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-650980
	I0731 10:56:12.547222   16633 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 10:56:12.547276   16633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-650980
	I0731 10:56:12.565777   16633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/addons-650980/id_rsa Username:docker}
	I0731 10:56:12.567220   16633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/addons-650980/id_rsa Username:docker}
	I0731 10:56:12.739802   16633 ssh_runner.go:195] Run: systemctl --version
	I0731 10:56:12.743976   16633 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 10:56:12.878461   16633 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0731 10:56:12.882544   16633 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 10:56:12.899496   16633 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0731 10:56:12.899568   16633 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 10:56:12.924926   16633 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0731 10:56:12.924945   16633 start.go:466] detecting cgroup driver to use...
	I0731 10:56:12.924972   16633 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0731 10:56:12.925021   16633 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 10:56:12.938124   16633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 10:56:12.947909   16633 docker.go:196] disabling cri-docker service (if available) ...
	I0731 10:56:12.947964   16633 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 10:56:12.959795   16633 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 10:56:12.971848   16633 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 10:56:13.047955   16633 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 10:56:13.131444   16633 docker.go:212] disabling docker service ...
	I0731 10:56:13.131493   16633 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 10:56:13.147870   16633 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 10:56:13.158024   16633 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 10:56:13.235441   16633 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 10:56:13.319325   16633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 10:56:13.329360   16633 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 10:56:13.343005   16633 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 10:56:13.343064   16633 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 10:56:13.351233   16633 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 10:56:13.351288   16633 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 10:56:13.359956   16633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 10:56:13.368716   16633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
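	The three sed edits above leave the relevant keys in /etc/crio/crio.conf.d/02-crio.conf as below; a quick grep sketch to confirm, with the expected values taken from the commands logged:
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	  # pause_image = "registry.k8s.io/pause:3.9"
	  # cgroup_manager = "cgroupfs"
	  # conmon_cgroup = "pod"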
	I0731 10:56:13.377044   16633 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 10:56:13.384628   16633 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 10:56:13.391595   16633 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 10:56:13.398795   16633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 10:56:13.471090   16633 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 10:56:13.584275   16633 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 10:56:13.584348   16633 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 10:56:13.587703   16633 start.go:534] Will wait 60s for crictl version
	I0731 10:56:13.587754   16633 ssh_runner.go:195] Run: which crictl
	I0731 10:56:13.590744   16633 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 10:56:13.622844   16633 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0731 10:56:13.622940   16633 ssh_runner.go:195] Run: crio --version
	I0731 10:56:13.656491   16633 ssh_runner.go:195] Run: crio --version
	I0731 10:56:13.689748   16633 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.6 ...
	I0731 10:56:13.691115   16633 cli_runner.go:164] Run: docker network inspect addons-650980 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 10:56:13.707304   16633 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0731 10:56:13.710912   16633 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 10:56:13.720507   16633 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0731 10:56:13.720557   16633 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 10:56:13.767589   16633 crio.go:496] all images are preloaded for cri-o runtime.
	I0731 10:56:13.767620   16633 crio.go:415] Images already preloaded, skipping extraction
	I0731 10:56:13.767660   16633 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 10:56:13.798099   16633 crio.go:496] all images are preloaded for cri-o runtime.
	I0731 10:56:13.798119   16633 cache_images.go:84] Images are preloaded, skipping loading
	I0731 10:56:13.798174   16633 ssh_runner.go:195] Run: crio config
	I0731 10:56:13.836513   16633 cni.go:84] Creating CNI manager for ""
	I0731 10:56:13.836531   16633 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0731 10:56:13.836544   16633 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0731 10:56:13.836574   16633 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-650980 NodeName:addons-650980 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 10:56:13.836756   16633 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-650980"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 10:56:13.836836   16633 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-650980 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:addons-650980 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0731 10:56:13.836889   16633 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0731 10:56:13.845377   16633 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 10:56:13.845430   16633 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 10:56:13.852822   16633 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I0731 10:56:13.867614   16633 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
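	With both unit files copied above, a sketch for inspecting what systemd will actually run (standard systemctl subcommands):
	  systemctl cat kubelet                           # base unit plus the 10-kubeadm.conf drop-in
	  systemctl show kubelet -p ExecStart --no-pager  # effective ExecStart after the drop-in override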
	I0731 10:56:13.883090   16633 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
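	The staged kubeadm config can be sanity-checked before init without touching cluster state; a sketch using minikube's bundled kubeadm, binary path as in the init invocation further down:
	  sudo /var/lib/minikube/binaries/v1.27.3/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run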
	I0731 10:56:13.898915   16633 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0731 10:56:13.901913   16633 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 10:56:13.911694   16633 certs.go:56] Setting up /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980 for IP: 192.168.49.2
	I0731 10:56:13.911722   16633 certs.go:190] acquiring lock for shared ca certs: {Name:mkc3a3f248dbae88fa439f539f826d6e08b37eb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:56:13.911844   16633 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/16968-8855/.minikube/ca.key
	I0731 10:56:14.058469   16633 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16968-8855/.minikube/ca.crt ...
	I0731 10:56:14.058500   16633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-8855/.minikube/ca.crt: {Name:mk253854807a1318c486182706bff753833d9e2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:56:14.058663   16633 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16968-8855/.minikube/ca.key ...
	I0731 10:56:14.058673   16633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-8855/.minikube/ca.key: {Name:mk55f1387d71b59c128b2c67f3decc46a82ac699 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:56:14.058749   16633 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/16968-8855/.minikube/proxy-client-ca.key
	I0731 10:56:14.136115   16633 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16968-8855/.minikube/proxy-client-ca.crt ...
	I0731 10:56:14.136143   16633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-8855/.minikube/proxy-client-ca.crt: {Name:mk6800f66210806084148f8a82b0253bd4bfc31c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:56:14.136291   16633 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16968-8855/.minikube/proxy-client-ca.key ...
	I0731 10:56:14.136300   16633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-8855/.minikube/proxy-client-ca.key: {Name:mk496d3a3eb35ba39d8a8cfb069306024e33d3e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:56:14.136393   16633 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/client.key
	I0731 10:56:14.136407   16633 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/client.crt with IP's: []
	I0731 10:56:14.366795   16633 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/client.crt ...
	I0731 10:56:14.366824   16633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/client.crt: {Name:mk7028f5ff11b02836b003a0c99db40238eb351d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:56:14.366976   16633 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/client.key ...
	I0731 10:56:14.366986   16633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/client.key: {Name:mk5d0a0121ee187af686ac4dd469c035dfb4ab62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:56:14.367050   16633 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/apiserver.key.dd3b5fb2
	I0731 10:56:14.367066   16633 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0731 10:56:14.605663   16633 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/apiserver.crt.dd3b5fb2 ...
	I0731 10:56:14.605691   16633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/apiserver.crt.dd3b5fb2: {Name:mk84ed3dbaa9530780120029e3f9af026a291e7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:56:14.605832   16633 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/apiserver.key.dd3b5fb2 ...
	I0731 10:56:14.605843   16633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/apiserver.key.dd3b5fb2: {Name:mk621d6553b9f78103d21245dc2ea5580fd76c95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:56:14.605905   16633 certs.go:337] copying /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/apiserver.crt
	I0731 10:56:14.605967   16633 certs.go:341] copying /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/apiserver.key
	I0731 10:56:14.606008   16633 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/proxy-client.key
	I0731 10:56:14.606020   16633 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/proxy-client.crt with IP's: []
	I0731 10:56:14.751511   16633 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/proxy-client.crt ...
	I0731 10:56:14.751540   16633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/proxy-client.crt: {Name:mk016a85f0f4334e3770d0b58be56162567ac3bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:56:14.751698   16633 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/proxy-client.key ...
	I0731 10:56:14.751708   16633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/proxy-client.key: {Name:mk149b53e7e6cfc7486ea55569788cc3c2281418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:56:14.751860   16633 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 10:56:14.751922   16633 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca.pem (1082 bytes)
	I0731 10:56:14.751959   16633 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/home/jenkins/minikube-integration/16968-8855/.minikube/certs/cert.pem (1123 bytes)
	I0731 10:56:14.751986   16633 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/home/jenkins/minikube-integration/16968-8855/.minikube/certs/key.pem (1675 bytes)
	I0731 10:56:14.752561   16633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0731 10:56:14.773866   16633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 10:56:14.794362   16633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 10:56:14.814749   16633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 10:56:14.834799   16633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 10:56:14.854709   16633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0731 10:56:14.874613   16633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 10:56:14.894779   16633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 10:56:14.914886   16633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 10:56:14.935077   16633 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 10:56:14.950117   16633 ssh_runner.go:195] Run: openssl version
	I0731 10:56:14.955066   16633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 10:56:14.963331   16633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 10:56:14.966482   16633 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 31 10:56 /usr/share/ca-certificates/minikubeCA.pem
	I0731 10:56:14.966533   16633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 10:56:14.972567   16633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
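	The b5213941.0 link name above follows OpenSSL's subject-hash convention for CA lookup directories; a sketch showing where the name comes from:
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem  # prints the subject hash, here b5213941
	  ls -l /etc/ssl/certs/b5213941.0                                          # hash-named symlink back to minikubeCA.pem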
	I0731 10:56:14.980513   16633 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0731 10:56:14.983384   16633 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0731 10:56:14.983438   16633 kubeadm.go:404] StartCluster: {Name:addons-650980 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-650980 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 10:56:14.983531   16633 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 10:56:14.983579   16633 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 10:56:15.015594   16633 cri.go:89] found id: ""
	I0731 10:56:15.015661   16633 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 10:56:15.023325   16633 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 10:56:15.030922   16633 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0731 10:56:15.030974   16633 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 10:56:15.038595   16633 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 10:56:15.038631   16633 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0731 10:56:15.081972   16633 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0731 10:56:15.082039   16633 kubeadm.go:322] [preflight] Running pre-flight checks
	I0731 10:56:15.115386   16633 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0731 10:56:15.115479   16633 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1038-gcp
	I0731 10:56:15.115536   16633 kubeadm.go:322] OS: Linux
	I0731 10:56:15.115599   16633 kubeadm.go:322] CGROUPS_CPU: enabled
	I0731 10:56:15.115683   16633 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0731 10:56:15.115750   16633 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0731 10:56:15.115823   16633 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0731 10:56:15.115912   16633 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0731 10:56:15.115993   16633 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0731 10:56:15.116054   16633 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0731 10:56:15.116147   16633 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0731 10:56:15.116227   16633 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0731 10:56:15.177439   16633 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 10:56:15.177574   16633 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 10:56:15.177704   16633 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 10:56:15.363091   16633 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 10:56:15.364869   16633 out.go:204]   - Generating certificates and keys ...
	I0731 10:56:15.364998   16633 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0731 10:56:15.365089   16633 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0731 10:56:15.432832   16633 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 10:56:15.533348   16633 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0731 10:56:15.740384   16633 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0731 10:56:16.135502   16633 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0731 10:56:16.277022   16633 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0731 10:56:16.277173   16633 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-650980 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0731 10:56:16.345131   16633 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0731 10:56:16.345270   16633 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-650980 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0731 10:56:16.468160   16633 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 10:56:16.673582   16633 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 10:56:16.866785   16633 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0731 10:56:16.866856   16633 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 10:56:16.959474   16633 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 10:56:17.125846   16633 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 10:56:17.260111   16633 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 10:56:17.424693   16633 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 10:56:17.432764   16633 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 10:56:17.433586   16633 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 10:56:17.433636   16633 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0731 10:56:17.516418   16633 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 10:56:17.518505   16633 out.go:204]   - Booting up control plane ...
	I0731 10:56:17.518654   16633 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 10:56:17.519276   16633 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 10:56:17.521313   16633 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 10:56:17.522240   16633 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 10:56:17.525028   16633 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 10:56:22.526942   16633 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.001826 seconds
	I0731 10:56:22.527144   16633 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 10:56:22.538901   16633 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 10:56:23.055986   16633 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 10:56:23.056238   16633 kubeadm.go:322] [mark-control-plane] Marking the node addons-650980 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 10:56:23.566298   16633 kubeadm.go:322] [bootstrap-token] Using token: oaigd5.6utwpcgy6b5uwr8k
	I0731 10:56:23.567837   16633 out.go:204]   - Configuring RBAC rules ...
	I0731 10:56:23.567977   16633 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 10:56:23.571529   16633 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 10:56:23.577350   16633 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 10:56:23.581284   16633 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 10:56:23.583945   16633 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 10:56:23.586498   16633 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 10:56:23.596420   16633 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 10:56:23.812857   16633 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0731 10:56:23.974805   16633 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0731 10:56:23.975587   16633 kubeadm.go:322] 
	I0731 10:56:23.975687   16633 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0731 10:56:23.975702   16633 kubeadm.go:322] 
	I0731 10:56:23.975774   16633 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0731 10:56:23.975785   16633 kubeadm.go:322] 
	I0731 10:56:23.975821   16633 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0731 10:56:23.975916   16633 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 10:56:23.975997   16633 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 10:56:23.976006   16633 kubeadm.go:322] 
	I0731 10:56:23.976088   16633 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0731 10:56:23.976107   16633 kubeadm.go:322] 
	I0731 10:56:23.976172   16633 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 10:56:23.976181   16633 kubeadm.go:322] 
	I0731 10:56:23.976259   16633 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0731 10:56:23.976352   16633 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 10:56:23.976418   16633 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 10:56:23.976424   16633 kubeadm.go:322] 
	I0731 10:56:23.976505   16633 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 10:56:23.976583   16633 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0731 10:56:23.976593   16633 kubeadm.go:322] 
	I0731 10:56:23.976695   16633 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token oaigd5.6utwpcgy6b5uwr8k \
	I0731 10:56:23.976822   16633 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:293b68dd99d5c75256004a8ddc8637ea08a1940f52c1b0e6476e24cc10aea3dd \
	I0731 10:56:23.976850   16633 kubeadm.go:322] 	--control-plane 
	I0731 10:56:23.976859   16633 kubeadm.go:322] 
	I0731 10:56:23.976979   16633 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0731 10:56:23.976991   16633 kubeadm.go:322] 
	I0731 10:56:23.977102   16633 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token oaigd5.6utwpcgy6b5uwr8k \
	I0731 10:56:23.977248   16633 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:293b68dd99d5c75256004a8ddc8637ea08a1940f52c1b0e6476e24cc10aea3dd 
	I0731 10:56:23.978824   16633 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1038-gcp\n", err: exit status 1
	I0731 10:56:23.978988   16633 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
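	The token in the join commands above expires (ttl: 24h0m0s per the config); a sketch for listing or reissuing it later with the bundled binary:
	  sudo /var/lib/minikube/binaries/v1.27.3/kubeadm token list --kubeconfig /etc/kubernetes/admin.conf
	  sudo /var/lib/minikube/binaries/v1.27.3/kubeadm token create --print-join-command --kubeconfig /etc/kubernetes/admin.conf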
	I0731 10:56:23.979022   16633 cni.go:84] Creating CNI manager for ""
	I0731 10:56:23.979031   16633 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0731 10:56:23.981662   16633 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0731 10:56:23.983121   16633 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0731 10:56:23.987645   16633 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0731 10:56:23.987666   16633 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0731 10:56:24.044551   16633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
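	A sketch for confirming the kindnet rollout after the apply; the app=kindnet label is an assumption about minikube's kindnet manifest:
	  sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get pods -l app=kindnet -o wide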
	I0731 10:56:24.666628   16633 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 10:56:24.666707   16633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:56:24.666710   16633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.31.1 minikube.k8s.io/commit=0b7b0f4114385a1c2b88c73e894c2289f44aee35 minikube.k8s.io/name=addons-650980 minikube.k8s.io/updated_at=2023_07_31T10_56_24_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:56:24.754560   16633 ops.go:34] apiserver oom_adj: -16
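	An oom_adj of -16 keeps the apiserver low on the OOM killer's target list; oom_adj is the legacy procfs knob, mirrored by the current oom_score_adj (a sketch; the scaled figure is approximate):
	  cat /proc/$(pgrep kube-apiserver)/oom_adj          # legacy interface, the value checked above
	  cat /proc/$(pgrep kube-apiserver)/oom_score_adj    # current interface; -16 scales to about -941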
	I0731 10:56:24.754675   16633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:56:24.814647   16633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:56:25.375747   16633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:56:25.875753   16633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:56:26.375277   16633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:56:26.875962   16633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:56:27.375736   16633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:56:27.875921   16633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:56:28.375833   16633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:56:28.875713   16633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:56:29.375327   16633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:56:29.875870   16633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:56:30.375733   16633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:56:30.875869   16633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:56:31.375842   16633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:56:31.875870   16633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:56:32.375904   16633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:56:32.875603   16633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:56:33.374995   16633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:56:33.875732   16633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:56:34.375816   16633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:56:34.875943   16633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:56:35.375331   16633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:56:35.876012   16633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:56:36.375011   16633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:56:36.875819   16633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:56:37.375949   16633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 10:56:37.496436   16633 kubeadm.go:1081] duration metric: took 12.829788021s to wait for elevateKubeSystemPrivileges.
	I0731 10:56:37.496472   16633 kubeadm.go:406] StartCluster complete in 22.513039589s
	I0731 10:56:37.496492   16633 settings.go:142] acquiring lock: {Name:mk56cd859b72e4589e0c5d99bc981c97b4dc2ff3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:56:37.496606   16633 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16968-8855/kubeconfig
	I0731 10:56:37.497001   16633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-8855/kubeconfig: {Name:mk53977df3b191de084093522567bbafd77b3df1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 10:56:37.497200   16633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0731 10:56:37.497263   16633 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
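	The same toEnable map is what the addons CLI flips per profile; a sketch with the standard subcommands:
	  out/minikube-linux-amd64 -p addons-650980 addons list
	  out/minikube-linux-amd64 -p addons-650980 addons enable metrics-server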
	I0731 10:56:37.497368   16633 addons.go:69] Setting volumesnapshots=true in profile "addons-650980"
	I0731 10:56:37.497383   16633 addons.go:69] Setting cloud-spanner=true in profile "addons-650980"
	I0731 10:56:37.497393   16633 addons.go:231] Setting addon volumesnapshots=true in "addons-650980"
	I0731 10:56:37.497399   16633 addons.go:231] Setting addon cloud-spanner=true in "addons-650980"
	I0731 10:56:37.497406   16633 config.go:182] Loaded profile config "addons-650980": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0731 10:56:37.497437   16633 host.go:66] Checking if "addons-650980" exists ...
	I0731 10:56:37.497433   16633 addons.go:69] Setting ingress-dns=true in profile "addons-650980"
	I0731 10:56:37.497446   16633 addons.go:69] Setting inspektor-gadget=true in profile "addons-650980"
	I0731 10:56:37.497452   16633 addons.go:69] Setting helm-tiller=true in profile "addons-650980"
	I0731 10:56:37.497439   16633 addons.go:69] Setting registry=true in profile "addons-650980"
	I0731 10:56:37.497459   16633 addons.go:231] Setting addon ingress-dns=true in "addons-650980"
	I0731 10:56:37.497464   16633 addons.go:231] Setting addon helm-tiller=true in "addons-650980"
	I0731 10:56:37.497463   16633 addons.go:69] Setting storage-provisioner=true in profile "addons-650980"
	I0731 10:56:37.497480   16633 addons.go:231] Setting addon registry=true in "addons-650980"
	I0731 10:56:37.497483   16633 addons.go:69] Setting metrics-server=true in profile "addons-650980"
	I0731 10:56:37.497490   16633 host.go:66] Checking if "addons-650980" exists ...
	I0731 10:56:37.497494   16633 addons.go:231] Setting addon storage-provisioner=true in "addons-650980"
	I0731 10:56:37.497506   16633 addons.go:231] Setting addon metrics-server=true in "addons-650980"
	I0731 10:56:37.497524   16633 host.go:66] Checking if "addons-650980" exists ...
	I0731 10:56:37.497525   16633 host.go:66] Checking if "addons-650980" exists ...
	I0731 10:56:37.497535   16633 host.go:66] Checking if "addons-650980" exists ...
	I0731 10:56:37.497539   16633 host.go:66] Checking if "addons-650980" exists ...
	I0731 10:56:37.497433   16633 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-650980"
	I0731 10:56:37.497585   16633 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-650980"
	I0731 10:56:37.497628   16633 host.go:66] Checking if "addons-650980" exists ...
	I0731 10:56:37.497460   16633 addons.go:231] Setting addon inspektor-gadget=true in "addons-650980"
	I0731 10:56:37.497723   16633 host.go:66] Checking if "addons-650980" exists ...
	I0731 10:56:37.497963   16633 cli_runner.go:164] Run: docker container inspect addons-650980 --format={{.State.Status}}
	I0731 10:56:37.497969   16633 cli_runner.go:164] Run: docker container inspect addons-650980 --format={{.State.Status}}
	I0731 10:56:37.497979   16633 cli_runner.go:164] Run: docker container inspect addons-650980 --format={{.State.Status}}
	I0731 10:56:37.497991   16633 cli_runner.go:164] Run: docker container inspect addons-650980 --format={{.State.Status}}
	I0731 10:56:37.498011   16633 cli_runner.go:164] Run: docker container inspect addons-650980 --format={{.State.Status}}
	I0731 10:56:37.498019   16633 cli_runner.go:164] Run: docker container inspect addons-650980 --format={{.State.Status}}
	I0731 10:56:37.498023   16633 cli_runner.go:164] Run: docker container inspect addons-650980 --format={{.State.Status}}
	I0731 10:56:37.497373   16633 addons.go:69] Setting default-storageclass=true in profile "addons-650980"
	I0731 10:56:37.498141   16633 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-650980"
	I0731 10:56:37.498145   16633 cli_runner.go:164] Run: docker container inspect addons-650980 --format={{.State.Status}}
	I0731 10:56:37.497459   16633 addons.go:69] Setting gcp-auth=true in profile "addons-650980"
	I0731 10:56:37.498190   16633 mustload.go:65] Loading cluster: addons-650980
	I0731 10:56:37.497469   16633 addons.go:69] Setting ingress=true in profile "addons-650980"
	I0731 10:56:37.498233   16633 addons.go:231] Setting addon ingress=true in "addons-650980"
	I0731 10:56:37.498276   16633 host.go:66] Checking if "addons-650980" exists ...
	I0731 10:56:37.497440   16633 host.go:66] Checking if "addons-650980" exists ...
	I0731 10:56:37.498376   16633 config.go:182] Loaded profile config "addons-650980": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0731 10:56:37.498393   16633 cli_runner.go:164] Run: docker container inspect addons-650980 --format={{.State.Status}}
	I0731 10:56:37.498617   16633 cli_runner.go:164] Run: docker container inspect addons-650980 --format={{.State.Status}}
	I0731 10:56:37.498682   16633 cli_runner.go:164] Run: docker container inspect addons-650980 --format={{.State.Status}}
	I0731 10:56:37.498711   16633 cli_runner.go:164] Run: docker container inspect addons-650980 --format={{.State.Status}}
	I0731 10:56:37.533573   16633 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 10:56:37.539991   16633 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 10:56:37.540013   16633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 10:56:37.540065   16633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-650980
	I0731 10:56:37.542207   16633 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0731 10:56:37.543552   16633 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0731 10:56:37.545012   16633 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0731 10:56:37.545029   16633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0731 10:56:37.545079   16633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-650980
	I0731 10:56:37.545178   16633 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.3
	I0731 10:56:37.546504   16633 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 10:56:37.546520   16633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 10:56:37.546565   16633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-650980
	I0731 10:56:37.545226   16633 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0731 10:56:37.545301   16633 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0731 10:56:37.550196   16633 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0731 10:56:37.549002   16633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0731 10:56:37.555232   16633 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0731 10:56:37.552453   16633 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0731 10:56:37.552464   16633 out.go:177]   - Using image docker.io/registry:2.8.1
	I0731 10:56:37.552516   16633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-650980
	I0731 10:56:37.558006   16633 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.18.1
	I0731 10:56:37.556809   16633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0731 10:56:37.559302   16633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-650980
	I0731 10:56:37.559452   16633 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0731 10:56:37.560911   16633 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0731 10:56:37.559638   16633 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0731 10:56:37.559729   16633 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I0731 10:56:37.561469   16633 host.go:66] Checking if "addons-650980" exists ...
	I0731 10:56:37.564337   16633 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-650980" context rescaled to 1 replicas
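	The rescale above is the programmatic equivalent of a standard kubectl scale; a sketch:
	  kubectl --context addons-650980 -n kube-system scale deployment coredns --replicas=1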
	I0731 10:56:37.564664   16633 addons.go:231] Setting addon default-storageclass=true in "addons-650980"
	I0731 10:56:37.565644   16633 host.go:66] Checking if "addons-650980" exists ...
	I0731 10:56:37.566129   16633 cli_runner.go:164] Run: docker container inspect addons-650980 --format={{.State.Status}}
	I0731 10:56:37.566309   16633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0731 10:56:37.566356   16633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-650980
	I0731 10:56:37.570884   16633 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0731 10:56:37.566623   16633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0731 10:56:37.566676   16633 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 10:56:37.577095   16633 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.8.1
	I0731 10:56:37.573634   16633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-650980
	I0731 10:56:37.574907   16633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/addons-650980/id_rsa Username:docker}
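Each of the docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" runs above resolves the host side of the node container's 22/tcp port mapping, and the value it returns is the Port:32772 that these ssh clients then dial. The same lookup by hand (assuming the docker CLI on the CI host) is just:

    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      addons-650980
    # prints the host port mapped to the node's sshd; 32772 in this run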
	I0731 10:56:37.581977   16633 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0731 10:56:37.583337   16633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/addons-650980/id_rsa Username:docker}
	I0731 10:56:37.584037   16633 out.go:177] * Verifying Kubernetes components...
	I0731 10:56:37.585476   16633 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.7
	I0731 10:56:37.590485   16633 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I0731 10:56:37.590508   16633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)
	I0731 10:56:37.590575   16633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-650980
	I0731 10:56:37.585603   16633 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0731 10:56:37.593888   16633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 10:56:37.592319   16633 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0731 10:56:37.595570   16633 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0731 10:56:37.599966   16633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/addons-650980/id_rsa Username:docker}
	I0731 10:56:37.601444   16633 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0731 10:56:37.600665   16633 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0731 10:56:37.605856   16633 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0731 10:56:37.605874   16633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0731 10:56:37.603934   16633 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0731 10:56:37.605931   16633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-650980
	I0731 10:56:37.605943   16633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0731 10:56:37.605998   16633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-650980
	I0731 10:56:37.606141   16633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/addons-650980/id_rsa Username:docker}
	I0731 10:56:37.613266   16633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/addons-650980/id_rsa Username:docker}
	I0731 10:56:37.620701   16633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/addons-650980/id_rsa Username:docker}
	I0731 10:56:37.622482   16633 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 10:56:37.622507   16633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 10:56:37.622557   16633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-650980
	I0731 10:56:37.622955   16633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/addons-650980/id_rsa Username:docker}
	I0731 10:56:37.628283   16633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/addons-650980/id_rsa Username:docker}
	I0731 10:56:37.636398   16633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/addons-650980/id_rsa Username:docker}
	I0731 10:56:37.637898   16633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/addons-650980/id_rsa Username:docker}
	I0731 10:56:37.649688   16633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/addons-650980/id_rsa Username:docker}
	I0731 10:56:37.744235   16633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0731 10:56:37.745914   16633 node_ready.go:35] waiting up to 6m0s for node "addons-650980" to be "Ready" ...
	I0731 10:56:37.930827   16633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0731 10:56:37.931727   16633 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0731 10:56:37.931809   16633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0731 10:56:37.948041   16633 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 10:56:37.948125   16633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0731 10:56:37.955609   16633 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0731 10:56:37.955631   16633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0731 10:56:38.031577   16633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 10:56:38.038084   16633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0731 10:56:38.047715   16633 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0731 10:56:38.047741   16633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0731 10:56:38.053774   16633 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0731 10:56:38.053800   16633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0731 10:56:38.133068   16633 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I0731 10:56:38.133099   16633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0731 10:56:38.144822   16633 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 10:56:38.144911   16633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 10:56:38.149297   16633 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0731 10:56:38.149371   16633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0731 10:56:38.231777   16633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0731 10:56:38.237760   16633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 10:56:38.238554   16633 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0731 10:56:38.238645   16633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0731 10:56:38.253206   16633 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 10:56:38.253279   16633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 10:56:38.362453   16633 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0731 10:56:38.362481   16633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0731 10:56:38.364668   16633 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0731 10:56:38.364686   16633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0731 10:56:38.430746   16633 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0731 10:56:38.430773   16633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0731 10:56:38.435012   16633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0731 10:56:38.442658   16633 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0731 10:56:38.442738   16633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0731 10:56:38.531539   16633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 10:56:38.550755   16633 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0731 10:56:38.550833   16633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0731 10:56:38.736071   16633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0731 10:56:38.739852   16633 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0731 10:56:38.739890   16633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0731 10:56:38.834506   16633 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0731 10:56:38.834573   16633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0731 10:56:38.931803   16633 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0731 10:56:38.931831   16633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0731 10:56:39.040461   16633 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0731 10:56:39.040489   16633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0731 10:56:39.145144   16633 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0731 10:56:39.145217   16633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0731 10:56:39.347845   16633 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0731 10:56:39.347873   16633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0731 10:56:39.430852   16633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0731 10:56:39.442325   16633 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0731 10:56:39.442413   16633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0731 10:56:39.848796   16633 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0731 10:56:39.848871   16633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0731 10:56:39.854990   16633 node_ready.go:58] node "addons-650980" has status "Ready":"False"
	I0731 10:56:39.932658   16633 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0731 10:56:39.932736   16633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0731 10:56:39.937786   16633 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.193504982s)
	I0731 10:56:39.937866   16633 start.go:901] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
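The bash pipeline that just completed patches the coredns ConfigMap in place: its two sed expressions insert a hosts block ahead of the "forward . /etc/resolv.conf" directive and a "log" directive ahead of "errors". Reconstructed from those sed expressions (the surrounding directives are whatever the cluster's stock Corefile already contains), the patched fragment looks roughly like:

    log
    errors
    ...
    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf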
	I0731 10:56:40.139261   16633 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0731 10:56:40.139343   16633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0731 10:56:40.230877   16633 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0731 10:56:40.230907   16633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0731 10:56:40.441699   16633 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0731 10:56:40.441772   16633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0731 10:56:40.735877   16633 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0731 10:56:40.735919   16633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0731 10:56:40.745897   16633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0731 10:56:40.937740   16633 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0731 10:56:40.937770   16633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0731 10:56:41.231792   16633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0731 10:56:41.532642   16633 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.601773334s)
	I0731 10:56:42.039216   16633 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.007542148s)
	I0731 10:56:42.039362   16633 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.001192364s)
	I0731 10:56:42.338005   16633 node_ready.go:58] node "addons-650980" has status "Ready":"False"
	I0731 10:56:43.540185   16633 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.302345806s)
	I0731 10:56:43.540184   16633 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.308295888s)
	I0731 10:56:43.540237   16633 addons.go:467] Verifying addon ingress=true in "addons-650980"
	I0731 10:56:43.540248   16633 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.105206607s)
	I0731 10:56:43.541966   16633 out.go:177] * Verifying ingress addon...
	I0731 10:56:43.540309   16633 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.008680751s)
	I0731 10:56:43.540341   16633 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.80417846s)
	I0731 10:56:43.540460   16633 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.109572292s)
	I0731 10:56:43.540521   16633 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (2.794589728s)
	I0731 10:56:43.543420   16633 addons.go:467] Verifying addon registry=true in "addons-650980"
	I0731 10:56:43.543451   16633 addons.go:467] Verifying addon metrics-server=true in "addons-650980"
	I0731 10:56:43.544836   16633 out.go:177] * Verifying registry addon...
	W0731 10:56:43.543492   16633 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0731 10:56:43.544177   16633 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0731 10:56:43.546484   16633 retry.go:31] will retry after 180.402755ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
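The apply failure quoted twice above is an ordering race rather than a bad manifest: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass in the same kubectl apply that creates the CRD for that kind, so the API server has no REST mapping for snapshot.storage.k8s.io/v1 yet ("ensure CRDs are installed first"). minikube's remedy is the 180ms retry and the "apply --force" re-run below, which completes once the CRDs are established. A sketch of sequencing that sidesteps the retry, assuming the same manifest paths and kubeconfig:

    kubectl apply \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
    # block until the CRD that defines VolumeSnapshotClass is Established
    kubectl wait --for condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml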
	I0731 10:56:43.547111   16633 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0731 10:56:43.551232   16633 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0731 10:56:43.551258   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:56:43.551584   16633 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0731 10:56:43.551607   16633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:56:43.554546   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:56:43.555562   16633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:56:43.727561   16633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0731 10:56:44.058761   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:56:44.059737   16633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:56:44.440216   16633 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0731 10:56:44.440341   16633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-650980
	I0731 10:56:44.461665   16633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/addons-650980/id_rsa Username:docker}
	I0731 10:56:44.637344   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:56:44.638818   16633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:56:44.839131   16633 node_ready.go:58] node "addons-650980" has status "Ready":"False"
	I0731 10:56:44.950934   16633 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0731 10:56:45.051786   16633 addons.go:231] Setting addon gcp-auth=true in "addons-650980"
	I0731 10:56:45.051841   16633 host.go:66] Checking if "addons-650980" exists ...
	I0731 10:56:45.052355   16633 cli_runner.go:164] Run: docker container inspect addons-650980 --format={{.State.Status}}
	I0731 10:56:45.071771   16633 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0731 10:56:45.071827   16633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-650980
	I0731 10:56:45.086937   16633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/addons-650980/id_rsa Username:docker}
	I0731 10:56:45.239817   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:56:45.241027   16633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:56:45.736721   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:56:45.737544   16633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:56:46.036259   16633 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.804373107s)
	I0731 10:56:46.036347   16633 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-650980"
	I0731 10:56:46.038126   16633 out.go:177] * Verifying csi-hostpath-driver addon...
	I0731 10:56:46.041018   16633 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0731 10:56:46.053182   16633 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0731 10:56:46.053209   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:56:46.140485   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:56:46.146298   16633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:56:46.147266   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:56:46.633485   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:56:46.635855   16633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:56:46.730998   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:56:47.145686   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:56:47.146719   16633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:56:47.152860   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:56:47.333219   16633 node_ready.go:58] node "addons-650980" has status "Ready":"False"
	I0731 10:56:47.633988   16633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:56:47.638004   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:56:47.656803   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:56:47.747871   16633 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.020262789s)
	I0731 10:56:47.747996   16633 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.67619042s)
	I0731 10:56:47.750078   16633 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0731 10:56:47.751754   16633 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0731 10:56:47.753250   16633 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0731 10:56:47.753294   16633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0731 10:56:47.846723   16633 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0731 10:56:47.846806   16633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0731 10:56:47.942555   16633 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0731 10:56:47.942619   16633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0731 10:56:48.034418   16633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0731 10:56:48.133090   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:56:48.135751   16633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:56:48.152508   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:56:48.634210   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:56:48.634891   16633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:56:48.652851   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:56:49.060842   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:56:49.131554   16633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:56:49.152432   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:56:49.648434   16633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:56:49.649011   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:56:49.653464   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:56:49.763389   16633 node_ready.go:58] node "addons-650980" has status "Ready":"False"
	I0731 10:56:50.133331   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:56:50.133956   16633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:56:50.152640   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:56:50.558603   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:56:50.561171   16633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:56:50.651313   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:56:51.059367   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:56:51.061329   16633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:56:51.152109   16633 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (3.117652009s)
	I0731 10:56:51.153095   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:56:51.153472   16633 addons.go:467] Verifying addon gcp-auth=true in "addons-650980"
	I0731 10:56:51.155350   16633 out.go:177] * Verifying gcp-auth addon...
	I0731 10:56:51.158540   16633 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0731 10:56:51.231767   16633 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0731 10:56:51.231792   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:56:51.234868   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:56:51.558944   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:56:51.559663   16633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:56:51.652313   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:56:51.737960   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:56:52.058946   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:56:52.059381   16633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:56:52.152350   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:56:52.238943   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:56:52.262648   16633 node_ready.go:58] node "addons-650980" has status "Ready":"False"
	I0731 10:56:52.559077   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:56:52.559303   16633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:56:52.652011   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:56:52.738895   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:56:53.060767   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:56:53.061118   16633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:56:53.151987   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:56:53.238799   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:56:53.558483   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:56:53.559433   16633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:56:53.651309   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:56:53.738819   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:56:54.058916   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:56:54.059418   16633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:56:54.151647   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:56:54.238527   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:56:54.262784   16633 node_ready.go:58] node "addons-650980" has status "Ready":"False"
	I0731 10:56:54.558892   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:56:54.559291   16633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:56:54.651153   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:56:54.739339   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:56:55.058434   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:56:55.058920   16633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:56:55.151622   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:56:55.238428   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:56:55.558898   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:56:55.559239   16633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:56:55.651646   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:56:55.738182   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:56:56.058956   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:56:56.059221   16633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:56:56.151434   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:56:56.238168   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:56:56.558255   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:56:56.558765   16633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:56:56.652008   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:56:56.738707   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:56:56.763505   16633 node_ready.go:58] node "addons-650980" has status "Ready":"False"
	I0731 10:56:57.059281   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:56:57.059480   16633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:56:57.151692   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:56:57.238093   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:56:57.558117   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:56:57.559077   16633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:56:57.651340   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:56:57.737976   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:56:58.059023   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:56:58.059297   16633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:56:58.151428   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:56:58.237888   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:56:58.558940   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:56:58.559130   16633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:56:58.651519   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:56:58.738221   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:56:59.058203   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:56:59.058882   16633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:56:59.152542   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:56:59.237849   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:56:59.263205   16633 node_ready.go:58] node "addons-650980" has status "Ready":"False"
	I0731 10:56:59.559083   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:56:59.559545   16633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:56:59.651580   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:56:59.738311   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:57:00.058194   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:57:00.058995   16633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:57:00.151032   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:57:00.238014   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:57:00.558603   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:57:00.558848   16633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:57:00.651072   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:57:00.738496   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:57:01.058878   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:57:01.059141   16633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:57:01.151468   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:57:01.238043   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:57:01.560378   16633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:57:01.560900   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:57:01.651312   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:57:01.737865   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:57:01.762244   16633 node_ready.go:58] node "addons-650980" has status "Ready":"False"
	I0731 10:57:02.059022   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:57:02.059394   16633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:57:02.151996   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:57:02.238606   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:57:02.558831   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:57:02.559262   16633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:57:02.651482   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:57:02.737978   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:57:03.058831   16633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:57:03.059004   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:57:03.151421   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:57:03.237821   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:57:03.559208   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:57:03.561119   16633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:57:03.651070   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:57:03.738760   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:57:03.763164   16633 node_ready.go:58] node "addons-650980" has status "Ready":"False"
	I0731 10:57:04.059246   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:57:04.059692   16633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:57:04.151746   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:57:04.238553   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:57:04.558765   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:57:04.559402   16633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:57:04.651519   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:57:04.738272   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:57:05.058581   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:57:05.059053   16633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:57:05.151164   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:57:05.237920   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:57:05.559241   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:57:05.559373   16633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:57:05.651835   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:57:05.738529   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:57:06.058934   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:57:06.059435   16633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:57:06.151864   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:57:06.238632   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:57:06.263090   16633 node_ready.go:58] node "addons-650980" has status "Ready":"False"
	... [repeated kapi.go:96 polling lines elided: registry, ingress-nginx, csi-hostpath-driver and gcp-auth selectors still Pending] ...
	I0731 10:57:08.763057   16633 node_ready.go:58] node "addons-650980" has status "Ready":"False"
	... [repeated kapi.go:96 polling lines elided: registry, ingress-nginx, csi-hostpath-driver and gcp-auth selectors still Pending] ...
	I0731 10:57:10.653374   16633 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0731 10:57:10.653456   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:57:10.756414   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:57:10.764215   16633 node_ready.go:49] node "addons-650980" has status "Ready":"True"
	I0731 10:57:10.764238   16633 node_ready.go:38] duration metric: took 33.018297799s waiting for node "addons-650980" to be "Ready" ...
	I0731 10:57:10.764247   16633 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 10:57:10.841742   16633 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-pvmrs" in "kube-system" namespace to be "Ready" ...
	I0731 10:57:11.059241   16633 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	... [repeated kapi.go:96 polling lines elided: registry, ingress-nginx, csi-hostpath-driver and gcp-auth selectors still Pending] ...
	I0731 10:57:12.436219   16633 pod_ready.go:92] pod "coredns-5d78c9869d-pvmrs" in "kube-system" namespace has status "Ready":"True"
	I0731 10:57:12.436245   16633 pod_ready.go:81] duration metric: took 1.594469526s waiting for pod "coredns-5d78c9869d-pvmrs" in "kube-system" namespace to be "Ready" ...
	I0731 10:57:12.436271   16633 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-650980" in "kube-system" namespace to be "Ready" ...
	I0731 10:57:12.442057   16633 pod_ready.go:92] pod "etcd-addons-650980" in "kube-system" namespace has status "Ready":"True"
	I0731 10:57:12.442079   16633 pod_ready.go:81] duration metric: took 5.799795ms waiting for pod "etcd-addons-650980" in "kube-system" namespace to be "Ready" ...
	I0731 10:57:12.442093   16633 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-650980" in "kube-system" namespace to be "Ready" ...
	I0731 10:57:12.447989   16633 pod_ready.go:92] pod "kube-apiserver-addons-650980" in "kube-system" namespace has status "Ready":"True"
	I0731 10:57:12.448009   16633 pod_ready.go:81] duration metric: took 5.90777ms waiting for pod "kube-apiserver-addons-650980" in "kube-system" namespace to be "Ready" ...
	I0731 10:57:12.448020   16633 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-650980" in "kube-system" namespace to be "Ready" ...
	I0731 10:57:12.453618   16633 pod_ready.go:92] pod "kube-controller-manager-addons-650980" in "kube-system" namespace has status "Ready":"True"
	I0731 10:57:12.453680   16633 pod_ready.go:81] duration metric: took 5.651933ms waiting for pod "kube-controller-manager-addons-650980" in "kube-system" namespace to be "Ready" ...
	I0731 10:57:12.453705   16633 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jnjmk" in "kube-system" namespace to be "Ready" ...
	I0731 10:57:12.567731   16633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:57:12.569120   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:57:12.655551   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:57:12.739116   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 10:57:12.764172   16633 pod_ready.go:92] pod "kube-proxy-jnjmk" in "kube-system" namespace has status "Ready":"True"
	I0731 10:57:12.764197   16633 pod_ready.go:81] duration metric: took 310.474671ms waiting for pod "kube-proxy-jnjmk" in "kube-system" namespace to be "Ready" ...
	I0731 10:57:12.764210   16633 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-650980" in "kube-system" namespace to be "Ready" ...
	I0731 10:57:13.060294   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 10:57:13.060503   16633 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 10:57:13.153342   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:57:13.164005   16633 pod_ready.go:92] pod "kube-scheduler-addons-650980" in "kube-system" namespace has status "Ready":"True"
	I0731 10:57:13.164025   16633 pod_ready.go:81] duration metric: took 399.80733ms waiting for pod "kube-scheduler-addons-650980" in "kube-system" namespace to be "Ready" ...
	I0731 10:57:13.164037   16633 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-844d8db974-vxkfw" in "kube-system" namespace to be "Ready" ...
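The per-pod checks above (pod_ready.go) read each pod's Ready condition from the API server. For reference only, the same checks can be reproduced by hand against this cluster; these commands are illustrative and not part of the test harness:

	kubectl --context addons-650980 -n kube-system get pod kube-scheduler-addons-650980 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# blocks until the metrics-server pod reports Ready (pod name taken from the log above)
	kubectl --context addons-650980 -n kube-system wait --for=condition=Ready \
	  pod/metrics-server-844d8db974-vxkfw --timeout=360s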
	... [repeated kapi.go:96 polling lines elided: registry, ingress-nginx, csi-hostpath-driver and gcp-auth selectors still Pending] ...
	I0731 10:57:15.469861   16633 pod_ready.go:102] pod "metrics-server-844d8db974-vxkfw" in "kube-system" namespace has status "Ready":"False"
	... [repeated kapi.go:96 polling lines elided: registry, ingress-nginx, csi-hostpath-driver and gcp-auth selectors still Pending] ...
	I0731 10:57:17.543936   16633 pod_ready.go:102] pod "metrics-server-844d8db974-vxkfw" in "kube-system" namespace has status "Ready":"False"
	... [repeated kapi.go:96 polling lines elided: registry, ingress-nginx, csi-hostpath-driver and gcp-auth selectors still Pending] ...
	I0731 10:57:19.970898   16633 pod_ready.go:102] pod "metrics-server-844d8db974-vxkfw" in "kube-system" namespace has status "Ready":"False"
	... [repeated kapi.go:96 polling lines elided: registry, ingress-nginx, csi-hostpath-driver and gcp-auth selectors still Pending] ...
	I0731 10:57:21.971022   16633 pod_ready.go:102] pod "metrics-server-844d8db974-vxkfw" in "kube-system" namespace has status "Ready":"False"
	... [repeated kapi.go:96 polling lines elided: registry, ingress-nginx, csi-hostpath-driver and gcp-auth selectors still Pending] ...
	I0731 10:57:24.469885   16633 pod_ready.go:102] pod "metrics-server-844d8db974-vxkfw" in "kube-system" namespace has status "Ready":"False"
	... [repeated kapi.go:96 polling lines elided: registry, ingress-nginx, csi-hostpath-driver and gcp-auth selectors still Pending] ...
	I0731 10:57:26.470367   16633 pod_ready.go:102] pod "metrics-server-844d8db974-vxkfw" in "kube-system" namespace has status "Ready":"False"
	... [repeated kapi.go:96 polling lines elided: registry, ingress-nginx, csi-hostpath-driver and gcp-auth selectors still Pending] ...
	I0731 10:57:28.470404   16633 pod_ready.go:102] pod "metrics-server-844d8db974-vxkfw" in "kube-system" namespace has status "Ready":"False"
	... [repeated kapi.go:96 polling lines elided: registry, ingress-nginx, csi-hostpath-driver and gcp-auth selectors still Pending] ...
	I0731 10:57:30.472807   16633 pod_ready.go:102] pod "metrics-server-844d8db974-vxkfw" in "kube-system" namespace has status "Ready":"False"
	... [repeated kapi.go:96 polling lines elided: registry, ingress-nginx, csi-hostpath-driver and gcp-auth selectors still Pending] ...
	I0731 10:57:31.469831   16633 pod_ready.go:92] pod "metrics-server-844d8db974-vxkfw" in "kube-system" namespace has status "Ready":"True"
	I0731 10:57:31.469851   16633 pod_ready.go:81] duration metric: took 18.305807337s waiting for pod "metrics-server-844d8db974-vxkfw" in "kube-system" namespace to be "Ready" ...
	I0731 10:57:31.469870   16633 pod_ready.go:38] duration metric: took 20.705611997s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 10:57:31.469884   16633 api_server.go:52] waiting for apiserver process to appear ...
	I0731 10:57:31.469924   16633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 10:57:31.482043   16633 api_server.go:72] duration metric: took 53.908379767s to wait for apiserver process to appear ...
	I0731 10:57:31.482067   16633 api_server.go:88] waiting for apiserver healthz status ...
	I0731 10:57:31.482086   16633 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0731 10:57:31.486959   16633 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0731 10:57:31.487872   16633 api_server.go:141] control plane version: v1.27.3
	I0731 10:57:31.487912   16633 api_server.go:131] duration metric: took 5.836743ms to wait for apiserver health ...
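The healthz probe above is a plain HTTPS GET against the apiserver. Assuming default RBAC (the system:public-info-viewer role exposes /healthz and /version anonymously), an equivalent manual check would be:

	curl -sk https://192.168.49.2:8443/healthz   # expect: ok
	curl -sk https://192.168.49.2:8443/version   # reports v1.27.3, matching the line above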
	I0731 10:57:31.487922   16633 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 10:57:31.495284   16633 system_pods.go:59] 18 kube-system pods found
	I0731 10:57:31.495309   16633 system_pods.go:61] "coredns-5d78c9869d-pvmrs" [d1635902-bca0-4d47-a5fc-3eb33dd6ed56] Running
	I0731 10:57:31.495314   16633 system_pods.go:61] "csi-hostpath-attacher-0" [8c05c549-63fe-43b2-930d-b0f20df35ba5] Running
	I0731 10:57:31.495318   16633 system_pods.go:61] "csi-hostpath-resizer-0" [cf404be3-bf1e-4ead-b5da-7345c9e52a0c] Running
	I0731 10:57:31.495325   16633 system_pods.go:61] "csi-hostpathplugin-dxbc8" [c614f775-c6f1-419d-ab2a-da1e4c16597c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0731 10:57:31.495331   16633 system_pods.go:61] "etcd-addons-650980" [43519bb2-62de-444a-b919-a6624c467753] Running
	I0731 10:57:31.495335   16633 system_pods.go:61] "kindnet-twvwd" [cb69138f-9212-4d9c-9b23-8dee77959cd9] Running
	I0731 10:57:31.495339   16633 system_pods.go:61] "kube-apiserver-addons-650980" [e17c3a6e-da1a-4a66-a2bb-208c4fcbdcaf] Running
	I0731 10:57:31.495343   16633 system_pods.go:61] "kube-controller-manager-addons-650980" [aaf7f30b-3e32-42f9-a837-45e2e08ec423] Running
	I0731 10:57:31.495348   16633 system_pods.go:61] "kube-ingress-dns-minikube" [bf7a826a-362b-4dc4-baab-fb1d9c417b91] Running
	I0731 10:57:31.495356   16633 system_pods.go:61] "kube-proxy-jnjmk" [d3c2fd89-80ba-4809-baaa-b61684493a09] Running
	I0731 10:57:31.495361   16633 system_pods.go:61] "kube-scheduler-addons-650980" [ec6da85e-fa46-49d5-83ef-fb845cd87d53] Running
	I0731 10:57:31.495367   16633 system_pods.go:61] "metrics-server-844d8db974-vxkfw" [c5548274-ef0a-43e9-b8c4-8bd2a19fca62] Running
	I0731 10:57:31.495373   16633 system_pods.go:61] "registry-proxy-b5pm4" [078f5480-bed3-4b63-b9f7-1c4c8b23ba61] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0731 10:57:31.495380   16633 system_pods.go:61] "registry-x5bdf" [6962bb94-2f59-41ec-bca8-678f12148c60] Running
	I0731 10:57:31.495385   16633 system_pods.go:61] "snapshot-controller-75bbb956b9-22th2" [1d476e4c-810b-4562-96a8-f769ca6eb3fe] Running
	I0731 10:57:31.495392   16633 system_pods.go:61] "snapshot-controller-75bbb956b9-ddmmb" [79e5d1d5-37b3-47de-baed-0d331d9af636] Running
	I0731 10:57:31.495397   16633 system_pods.go:61] "storage-provisioner" [4a0bfec7-485f-48f7-ba85-092ffa1fe7c7] Running
	I0731 10:57:31.495403   16633 system_pods.go:61] "tiller-deploy-6847666dc-tdbg5" [b35bc853-fbe0-4fb0-8648-4be6b0ffbac4] Running
	I0731 10:57:31.495408   16633 system_pods.go:74] duration metric: took 7.47949ms to wait for pod list to return data ...
	I0731 10:57:31.495416   16633 default_sa.go:34] waiting for default service account to be created ...
	I0731 10:57:31.497386   16633 default_sa.go:45] found service account: "default"
	I0731 10:57:31.497402   16633 default_sa.go:55] duration metric: took 1.98135ms for default service account to be created ...
	I0731 10:57:31.497409   16633 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 10:57:31.506267   16633 system_pods.go:86] 18 kube-system pods found
	I0731 10:57:31.506331   16633 system_pods.go:89] "coredns-5d78c9869d-pvmrs" [d1635902-bca0-4d47-a5fc-3eb33dd6ed56] Running
	I0731 10:57:31.506350   16633 system_pods.go:89] "csi-hostpath-attacher-0" [8c05c549-63fe-43b2-930d-b0f20df35ba5] Running
	I0731 10:57:31.506365   16633 system_pods.go:89] "csi-hostpath-resizer-0" [cf404be3-bf1e-4ead-b5da-7345c9e52a0c] Running
	I0731 10:57:31.506386   16633 system_pods.go:89] "csi-hostpathplugin-dxbc8" [c614f775-c6f1-419d-ab2a-da1e4c16597c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0731 10:57:31.506405   16633 system_pods.go:89] "etcd-addons-650980" [43519bb2-62de-444a-b919-a6624c467753] Running
	I0731 10:57:31.506428   16633 system_pods.go:89] "kindnet-twvwd" [cb69138f-9212-4d9c-9b23-8dee77959cd9] Running
	I0731 10:57:31.506444   16633 system_pods.go:89] "kube-apiserver-addons-650980" [e17c3a6e-da1a-4a66-a2bb-208c4fcbdcaf] Running
	I0731 10:57:31.506458   16633 system_pods.go:89] "kube-controller-manager-addons-650980" [aaf7f30b-3e32-42f9-a837-45e2e08ec423] Running
	I0731 10:57:31.506475   16633 system_pods.go:89] "kube-ingress-dns-minikube" [bf7a826a-362b-4dc4-baab-fb1d9c417b91] Running
	I0731 10:57:31.506489   16633 system_pods.go:89] "kube-proxy-jnjmk" [d3c2fd89-80ba-4809-baaa-b61684493a09] Running
	I0731 10:57:31.506505   16633 system_pods.go:89] "kube-scheduler-addons-650980" [ec6da85e-fa46-49d5-83ef-fb845cd87d53] Running
	I0731 10:57:31.506527   16633 system_pods.go:89] "metrics-server-844d8db974-vxkfw" [c5548274-ef0a-43e9-b8c4-8bd2a19fca62] Running
	I0731 10:57:31.506577   16633 system_pods.go:89] "registry-proxy-b5pm4" [078f5480-bed3-4b63-b9f7-1c4c8b23ba61] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0731 10:57:31.506595   16633 system_pods.go:89] "registry-x5bdf" [6962bb94-2f59-41ec-bca8-678f12148c60] Running
	I0731 10:57:31.506620   16633 system_pods.go:89] "snapshot-controller-75bbb956b9-22th2" [1d476e4c-810b-4562-96a8-f769ca6eb3fe] Running
	I0731 10:57:31.506638   16633 system_pods.go:89] "snapshot-controller-75bbb956b9-ddmmb" [79e5d1d5-37b3-47de-baed-0d331d9af636] Running
	I0731 10:57:31.506652   16633 system_pods.go:89] "storage-provisioner" [4a0bfec7-485f-48f7-ba85-092ffa1fe7c7] Running
	I0731 10:57:31.506667   16633 system_pods.go:89] "tiller-deploy-6847666dc-tdbg5" [b35bc853-fbe0-4fb0-8648-4be6b0ffbac4] Running
	I0731 10:57:31.506683   16633 system_pods.go:126] duration metric: took 9.268651ms to wait for k8s-apps to be running ...
	I0731 10:57:31.506701   16633 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 10:57:31.506753   16633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 10:57:31.517389   16633 system_svc.go:56] duration metric: took 10.68263ms WaitForService to wait for kubelet.
	I0731 10:57:31.517411   16633 kubeadm.go:581] duration metric: took 53.943754337s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0731 10:57:31.517437   16633 node_conditions.go:102] verifying NodePressure condition ...
	I0731 10:57:31.520282   16633 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0731 10:57:31.520310   16633 node_conditions.go:123] node cpu capacity is 8
	I0731 10:57:31.520326   16633 node_conditions.go:105] duration metric: took 2.882776ms to run NodePressure ...
	I0731 10:57:31.520339   16633 start.go:228] waiting for startup goroutines ...
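The kubelet and node-capacity checks in this block (system_svc.go, node_conditions.go) correspond to ordinary commands; illustrative equivalents run via minikube ssh and kubectl:

	# exit status 0 means the kubelet systemd unit is active
	minikube -p addons-650980 ssh -- sudo systemctl is-active kubelet
	# prints the CPU and ephemeral-storage capacity verified above (8, 304681132Ki)
	kubectl --context addons-650980 get node addons-650980 \
	  -o jsonpath='{.status.capacity.cpu}{" "}{.status.capacity.ephemeral-storage}'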
	... [repeated kapi.go:96 polling lines elided: registry, ingress-nginx, csi-hostpath-driver and gcp-auth selectors still Pending] ...
	I0731 10:57:35.634400   16633 kapi.go:107] duration metric: took 52.087280446s to wait for kubernetes.io/minikube-addons=registry ...
	... [repeated kapi.go:96 polling lines elided: ingress-nginx, csi-hostpath-driver and gcp-auth selectors still Pending] ...
	I0731 10:57:40.559484   16633 kapi.go:107] duration metric: took 57.015301745s to wait for app.kubernetes.io/name=ingress-nginx ...
	... [repeated kapi.go:96 polling lines elided: csi-hostpath-driver and gcp-auth selectors still Pending] ...
	I0731 10:57:45.739056   16633 kapi.go:107] duration metric: took 54.580510386s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0731 10:57:45.741140   16633 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-650980 cluster.
	I0731 10:57:45.742702   16633 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0731 10:57:45.744216   16633 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
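The gcp-auth opt-out label mentioned above must be present when the pod is created; a minimal illustrative example (pod name and image are arbitrary):

	kubectl --context addons-650980 run skip-gcp-demo --image=busybox \
	  --labels=gcp-auth-skip-secret=true --restart=Never -- sleep 3600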
	I0731 10:57:46.153379   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:57:46.652595   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:57:47.153424   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:57:47.652428   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:57:48.152453   16633 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 10:57:48.654317   16633 kapi.go:107] duration metric: took 1m2.613295729s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0731 10:57:48.656056   16633 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, cloud-spanner, default-storageclass, helm-tiller, inspektor-gadget, metrics-server, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0731 10:57:48.657409   16633 addons.go:502] enable addons completed in 1m11.160144984s: enabled=[ingress-dns storage-provisioner cloud-spanner default-storageclass helm-tiller inspektor-gadget metrics-server volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0731 10:57:48.657436   16633 start.go:233] waiting for cluster config update ...
	I0731 10:57:48.657450   16633 start.go:242] writing updated cluster config ...
	I0731 10:57:48.657678   16633 ssh_runner.go:195] Run: rm -f paused
	I0731 10:57:48.705440   16633 start.go:596] kubectl: 1.27.4, cluster: 1.27.3 (minor skew: 0)
	I0731 10:57:48.707365   16633 out.go:177] * Done! kubectl is now configured to use "addons-650980" cluster and "default" namespace by default
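The kapi.go polling loops above amount to waiting on four label selectors until every matching pod is Ready. A by-hand equivalent using kubectl wait (namespaces taken from the pod lists in this log):

	# csi-hostpath pods live in kube-system (see the pod list above)
	kubectl --context addons-650980 -n kube-system wait --for=condition=Ready \
	  pod -l kubernetes.io/minikube-addons=csi-hostpath-driver --timeout=600s
	# the controller runs in ingress-nginx; the field selector skips any completed admission-job pods
	kubectl --context addons-650980 -n ingress-nginx wait --for=condition=Ready \
	  pod -l app.kubernetes.io/name=ingress-nginx --field-selector=status.phase=Running --timeout=600s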
	
	* 
	* ==> CRI-O <==
	* Jul 31 11:00:24 addons-650980 crio[946]: time="2023-07-31 11:00:24.937143954Z" level=info msg="Removing container: fe4bc493e1d756717580c5bc1e7654673d2981c5ed02c24e0ef3026cd23fe00c" id=af473009-660c-4f1b-b5e9-f6b0aa235009 name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 31 11:00:24 addons-650980 crio[946]: time="2023-07-31 11:00:24.953802851Z" level=info msg="Removed container fe4bc493e1d756717580c5bc1e7654673d2981c5ed02c24e0ef3026cd23fe00c: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=af473009-660c-4f1b-b5e9-f6b0aa235009 name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 31 11:00:25 addons-650980 crio[946]: time="2023-07-31 11:00:25.355335021Z" level=info msg="Pulled image: gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea" id=4781c46b-a554-42ea-b292-3aa3fbe58dd6 name=/runtime.v1.ImageService/PullImage
	Jul 31 11:00:25 addons-650980 crio[946]: time="2023-07-31 11:00:25.356229830Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=357be5cf-62ae-405d-a67a-5afb447e3857 name=/runtime.v1.ImageService/ImageStatus
	Jul 31 11:00:25 addons-650980 crio[946]: time="2023-07-31 11:00:25.356876459Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:13753a81eccfdd153bf7fc9a4c9198edbcce0110e7f46ed0d38cc654a6458ff5,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea],Size_:28496999,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=357be5cf-62ae-405d-a67a-5afb447e3857 name=/runtime.v1.ImageService/ImageStatus
	Jul 31 11:00:25 addons-650980 crio[946]: time="2023-07-31 11:00:25.357646473Z" level=info msg="Creating container: default/hello-world-app-65bdb79f98-75lsc/hello-world-app" id=f69378a5-fb74-41c4-ac8e-6582d4b30c91 name=/runtime.v1.RuntimeService/CreateContainer
	Jul 31 11:00:25 addons-650980 crio[946]: time="2023-07-31 11:00:25.357737605Z" level=warning msg="Allowed annotations are specified for workload []"
	Jul 31 11:00:25 addons-650980 crio[946]: time="2023-07-31 11:00:25.440024575Z" level=info msg="Created container 8750bc8cd86bc6fa689a1f6d48861b948fe060248f3cf7f7549f366552cb96e0: default/hello-world-app-65bdb79f98-75lsc/hello-world-app" id=f69378a5-fb74-41c4-ac8e-6582d4b30c91 name=/runtime.v1.RuntimeService/CreateContainer
	Jul 31 11:00:25 addons-650980 crio[946]: time="2023-07-31 11:00:25.440637559Z" level=info msg="Starting container: 8750bc8cd86bc6fa689a1f6d48861b948fe060248f3cf7f7549f366552cb96e0" id=82d08a1e-2c07-4a84-8eea-25935d1f9db0 name=/runtime.v1.RuntimeService/StartContainer
	Jul 31 11:00:25 addons-650980 crio[946]: time="2023-07-31 11:00:25.450271730Z" level=info msg="Started container" PID=9639 containerID=8750bc8cd86bc6fa689a1f6d48861b948fe060248f3cf7f7549f366552cb96e0 description=default/hello-world-app-65bdb79f98-75lsc/hello-world-app id=82d08a1e-2c07-4a84-8eea-25935d1f9db0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a0c816cf13c843a82c6e8e435050a18240ee80b7e96ffad6f37abd822f933938
	Jul 31 11:00:25 addons-650980 crio[946]: time="2023-07-31 11:00:25.564108734Z" level=info msg="Stopping container: 75f337e8d81eac139a87e1aca1e8b2386fbc5271db27c61cdd2397b4142a23d4 (timeout: 1s)" id=1d969a06-6fd3-4dde-bd99-8e5109749895 name=/runtime.v1.RuntimeService/StopContainer
	Jul 31 11:00:26 addons-650980 crio[946]: time="2023-07-31 11:00:26.573622624Z" level=warning msg="Stopping container 75f337e8d81eac139a87e1aca1e8b2386fbc5271db27c61cdd2397b4142a23d4 with stop signal timed out: timeout reached after 1 seconds waiting for container process to exit" id=1d969a06-6fd3-4dde-bd99-8e5109749895 name=/runtime.v1.RuntimeService/StopContainer
	Jul 31 11:00:26 addons-650980 conmon[5570]: conmon 75f337e8d81eac139a87 <ninfo>: container 5582 exited with status 137
	Jul 31 11:00:26 addons-650980 crio[946]: time="2023-07-31 11:00:26.718505453Z" level=info msg="Stopped container 75f337e8d81eac139a87e1aca1e8b2386fbc5271db27c61cdd2397b4142a23d4: ingress-nginx/ingress-nginx-controller-7799c6795f-p2xzl/controller" id=1d969a06-6fd3-4dde-bd99-8e5109749895 name=/runtime.v1.RuntimeService/StopContainer
	Jul 31 11:00:26 addons-650980 crio[946]: time="2023-07-31 11:00:26.719049545Z" level=info msg="Stopping pod sandbox: de78ad7ba80faca439ba7fa4ecfd9551e3490afe63eb2023863c9662827f41ce" id=939f4a43-8572-4087-904e-a2678f7a6c7e name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 31 11:00:26 addons-650980 crio[946]: time="2023-07-31 11:00:26.722104522Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-6MRRT7DBWR6JZ255 - [0:0]\n:KUBE-HP-7L5FUCOVT5ZY4MZD - [0:0]\n-X KUBE-HP-6MRRT7DBWR6JZ255\n-X KUBE-HP-7L5FUCOVT5ZY4MZD\nCOMMIT\n"
	Jul 31 11:00:26 addons-650980 crio[946]: time="2023-07-31 11:00:26.723396821Z" level=info msg="Closing host port tcp:80"
	Jul 31 11:00:26 addons-650980 crio[946]: time="2023-07-31 11:00:26.723441851Z" level=info msg="Closing host port tcp:443"
	Jul 31 11:00:26 addons-650980 crio[946]: time="2023-07-31 11:00:26.724955594Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jul 31 11:00:26 addons-650980 crio[946]: time="2023-07-31 11:00:26.724980090Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jul 31 11:00:26 addons-650980 crio[946]: time="2023-07-31 11:00:26.725182140Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7799c6795f-p2xzl Namespace:ingress-nginx ID:de78ad7ba80faca439ba7fa4ecfd9551e3490afe63eb2023863c9662827f41ce UID:26e46fbb-3d61-4239-bf94-10b36baf3578 NetNS:/var/run/netns/37f70053-1164-4186-91ec-5fecc579c4ff Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jul 31 11:00:26 addons-650980 crio[946]: time="2023-07-31 11:00:26.725350181Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7799c6795f-p2xzl from CNI network \"kindnet\" (type=ptp)"
	Jul 31 11:00:26 addons-650980 crio[946]: time="2023-07-31 11:00:26.761213859Z" level=info msg="Stopped pod sandbox: de78ad7ba80faca439ba7fa4ecfd9551e3490afe63eb2023863c9662827f41ce" id=939f4a43-8572-4087-904e-a2678f7a6c7e name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 31 11:00:26 addons-650980 crio[946]: time="2023-07-31 11:00:26.943905416Z" level=info msg="Removing container: 75f337e8d81eac139a87e1aca1e8b2386fbc5271db27c61cdd2397b4142a23d4" id=a2cf5a41-67b5-452d-bf91-147049890a15 name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 31 11:00:26 addons-650980 crio[946]: time="2023-07-31 11:00:26.959429155Z" level=info msg="Removed container 75f337e8d81eac139a87e1aca1e8b2386fbc5271db27c61cdd2397b4142a23d4: ingress-nginx/ingress-nginx-controller-7799c6795f-p2xzl/controller" id=a2cf5a41-67b5-452d-bf91-147049890a15 name=/runtime.v1.RuntimeService/RemoveContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8750bc8cd86bc       gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea                      8 seconds ago       Running             hello-world-app           0                   a0c816cf13c84       hello-world-app-65bdb79f98-75lsc
	72e0d008d4a71       docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6                              2 minutes ago       Running             nginx                     0                   c7e05de0d7451       nginx
	59f9da720dc41       ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45                        2 minutes ago       Running             headlamp                  0                   75a7364f7ad28       headlamp-66f6498c69-jqbhg
	5f6138a6835bc       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 2 minutes ago       Running             gcp-auth                  0                   69fbc258607c7       gcp-auth-58478865f7-5wpvw
	a3a8498c4629f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d   3 minutes ago       Exited              patch                     0                   b9383091c629c       ingress-nginx-admission-patch-xnmx6
	eb56a94aa97a5       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d   3 minutes ago       Exited              create                    0                   c5ac363641525       ingress-nginx-admission-create-bwdtp
	90718e0f117f6       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             3 minutes ago       Running             coredns                   0                   5df005ef8d194       coredns-5d78c9869d-pvmrs
	3005d3388b9ce       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago       Running             storage-provisioner       0                   87204ff3445ac       storage-provisioner
	5d3753ecb9508       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da                                                             3 minutes ago       Running             kindnet-cni               0                   e96844dfbbde4       kindnet-twvwd
	6b0d630fb837d       5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c                                                             3 minutes ago       Running             kube-proxy                0                   dddbd623f1443       kube-proxy-jnjmk
	cc5fea6087a58       08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a                                                             4 minutes ago       Running             kube-apiserver            0                   cf88cbb967624       kube-apiserver-addons-650980
	0020f09ae7def       41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a                                                             4 minutes ago       Running             kube-scheduler            0                   2fc69c6865af0       kube-scheduler-addons-650980
	2f6cbaed3ff8c       7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f                                                             4 minutes ago       Running             kube-controller-manager   0                   6f40b92e952db       kube-controller-manager-addons-650980
	bc898411450be       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681                                                             4 minutes ago       Running             etcd                      0                   495305052e431       etcd-addons-650980
	
	* 
	* ==> coredns [90718e0f117f6e48eefb3c08eb5e4fc6bef6324a60a8c2cbabbb6468cba38b1f] <==
	* [INFO] 10.244.0.15:48016 - 18833 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000046524s
	[INFO] 10.244.0.15:43951 - 12419 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.00583382s
	[INFO] 10.244.0.15:43951 - 26502 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.006438035s
	[INFO] 10.244.0.15:50717 - 65485 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004199671s
	[INFO] 10.244.0.15:50717 - 19650 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005220949s
	[INFO] 10.244.0.15:52760 - 59605 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005216068s
	[INFO] 10.244.0.15:52760 - 12779 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.006079288s
	[INFO] 10.244.0.15:47617 - 62483 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000089528s
	[INFO] 10.244.0.15:47617 - 15639 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000092995s
	[INFO] 10.244.0.18:55529 - 38853 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00014688s
	[INFO] 10.244.0.18:54070 - 59139 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000203208s
	[INFO] 10.244.0.18:40482 - 57354 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000413506s
	[INFO] 10.244.0.18:50066 - 48382 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000560995s
	[INFO] 10.244.0.18:60557 - 7276 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000422123s
	[INFO] 10.244.0.18:60859 - 60318 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00046909s
	[INFO] 10.244.0.18:50388 - 7013 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.008983909s
	[INFO] 10.244.0.18:33054 - 50857 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.009187069s
	[INFO] 10.244.0.18:56375 - 10052 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.008030774s
	[INFO] 10.244.0.18:42557 - 53954 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.008477732s
	[INFO] 10.244.0.18:35135 - 50268 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006464604s
	[INFO] 10.244.0.18:34468 - 51641 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007261505s
	[INFO] 10.244.0.18:39390 - 25306 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000679932s
	[INFO] 10.244.0.18:50052 - 2716 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 382 0.0007696s
	[INFO] 10.244.0.21:52243 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000101377s
	[INFO] 10.244.0.21:44060 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000076364s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-650980
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-650980
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0b7b0f4114385a1c2b88c73e894c2289f44aee35
	                    minikube.k8s.io/name=addons-650980
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_31T10_56_24_0700
	                    minikube.k8s.io/version=v1.31.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-650980
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 31 Jul 2023 10:56:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-650980
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 31 Jul 2023 11:00:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 31 Jul 2023 10:58:57 +0000   Mon, 31 Jul 2023 10:56:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 31 Jul 2023 10:58:57 +0000   Mon, 31 Jul 2023 10:56:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 31 Jul 2023 10:58:57 +0000   Mon, 31 Jul 2023 10:56:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 31 Jul 2023 10:58:57 +0000   Mon, 31 Jul 2023 10:57:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-650980
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	System Info:
	  Machine ID:                 de27663c99cd4a9e9d79c5e8827da2aa
	  System UUID:                3aebde1f-c663-497f-b49d-76dc477803c4
	  Boot ID:                    c4e7adf1-530e-4fca-8214-6daedbc0c53f
	  Kernel Version:             5.15.0-1038-gcp
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-65bdb79f98-75lsc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  gcp-auth                    gcp-auth-58478865f7-5wpvw                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  headlamp                    headlamp-66f6498c69-jqbhg                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 coredns-5d78c9869d-pvmrs                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m56s
	  kube-system                 etcd-addons-650980                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m9s
	  kube-system                 kindnet-twvwd                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m56s
	  kube-system                 kube-apiserver-addons-650980             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-controller-manager-addons-650980    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-proxy-jnjmk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kube-scheduler-addons-650980             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m52s  kube-proxy       
	  Normal  Starting                 4m10s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m10s  kubelet          Node addons-650980 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m10s  kubelet          Node addons-650980 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m10s  kubelet          Node addons-650980 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m56s  node-controller  Node addons-650980 event: Registered Node addons-650980 in Controller
	  Normal  NodeReady                3m23s  kubelet          Node addons-650980 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.010664] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.006722] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001809] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.002355] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.002481] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.002874] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001116] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.001176] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.001142] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.001109] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +9.825406] kauditd_printk_skb: 36 callbacks suppressed
	[Jul31 10:58] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 1e 24 c8 40 80 98 8a 77 33 6e db fc 08 00
	[  +1.008314] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1e 24 c8 40 80 98 8a 77 33 6e db fc 08 00
	[  +2.015806] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1e 24 c8 40 80 98 8a 77 33 6e db fc 08 00
	[  +4.127611] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 1e 24 c8 40 80 98 8a 77 33 6e db fc 08 00
	[  +8.191195] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 1e 24 c8 40 80 98 8a 77 33 6e db fc 08 00
	[ +16.126431] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1e 24 c8 40 80 98 8a 77 33 6e db fc 08 00
	[Jul31 10:59] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1e 24 c8 40 80 98 8a 77 33 6e db fc 08 00
	
	* 
	* ==> etcd [bc898411450be40cd890b4302c4f8b243896cff5d6a3d1f41ffebad0a4e17edb] <==
	* {"level":"info","ts":"2023-07-31T10:56:40.745Z","caller":"traceutil/trace.go:171","msg":"trace[1133781745] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions/; range_end:/registry/apiextensions.k8s.io/customresourcedefinitions0; response_count:0; response_revision:397; }","duration":"103.590254ms","start":"2023-07-31T10:56:40.641Z","end":"2023-07-31T10:56:40.745Z","steps":["trace[1133781745] 'agreement among raft nodes before linearized reading'  (duration: 103.289862ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-31T10:56:40.745Z","caller":"traceutil/trace.go:171","msg":"trace[589713652] transaction","detail":"{read_only:false; response_revision:396; number_of_response:1; }","duration":"107.525447ms","start":"2023-07-31T10:56:40.638Z","end":"2023-07-31T10:56:40.745Z","steps":["trace[589713652] 'process raft request'  (duration: 103.620781ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-31T10:56:40.746Z","caller":"traceutil/trace.go:171","msg":"trace[1753628885] transaction","detail":"{read_only:false; response_revision:397; number_of_response:1; }","duration":"102.040097ms","start":"2023-07-31T10:56:40.643Z","end":"2023-07-31T10:56:40.745Z","steps":["trace[1753628885] 'process raft request'  (duration: 98.092026ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-31T10:56:41.234Z","caller":"traceutil/trace.go:171","msg":"trace[1321555585] transaction","detail":"{read_only:false; response_revision:402; number_of_response:1; }","duration":"187.857327ms","start":"2023-07-31T10:56:41.046Z","end":"2023-07-31T10:56:41.234Z","steps":["trace[1321555585] 'process raft request'  (duration: 187.761152ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-31T10:56:41.234Z","caller":"traceutil/trace.go:171","msg":"trace[1251843857] linearizableReadLoop","detail":"{readStateIndex:412; appliedIndex:412; }","duration":"185.464192ms","start":"2023-07-31T10:56:41.049Z","end":"2023-07-31T10:56:41.234Z","steps":["trace[1251843857] 'read index received'  (duration: 185.460041ms)","trace[1251843857] 'applied index is now lower than readState.Index'  (duration: 2.948µs)"],"step_count":2}
	{"level":"warn","ts":"2023-07-31T10:56:41.235Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.603024ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/default/\" range_end:\"/registry/limitranges/default0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-07-31T10:56:41.235Z","caller":"traceutil/trace.go:171","msg":"trace[1569558865] range","detail":"{range_begin:/registry/limitranges/default/; range_end:/registry/limitranges/default0; response_count:0; response_revision:402; }","duration":"185.652234ms","start":"2023-07-31T10:56:41.049Z","end":"2023-07-31T10:56:41.235Z","steps":["trace[1569558865] 'agreement among raft nodes before linearized reading'  (duration: 185.532266ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-31T10:56:41.237Z","caller":"traceutil/trace.go:171","msg":"trace[1981719171] transaction","detail":"{read_only:false; response_revision:404; number_of_response:1; }","duration":"104.245285ms","start":"2023-07-31T10:56:41.133Z","end":"2023-07-31T10:56:41.237Z","steps":["trace[1981719171] 'process raft request'  (duration: 103.375661ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-31T10:56:41.238Z","caller":"traceutil/trace.go:171","msg":"trace[1491285527] transaction","detail":"{read_only:false; response_revision:403; number_of_response:1; }","duration":"107.526459ms","start":"2023-07-31T10:56:41.130Z","end":"2023-07-31T10:56:41.238Z","steps":["trace[1491285527] 'process raft request'  (duration: 106.403697ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-31T10:56:41.238Z","caller":"traceutil/trace.go:171","msg":"trace[630990135] transaction","detail":"{read_only:false; response_revision:405; number_of_response:1; }","duration":"104.412478ms","start":"2023-07-31T10:56:41.133Z","end":"2023-07-31T10:56:41.238Z","steps":["trace[630990135] 'process raft request'  (duration: 103.163161ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-31T10:56:41.238Z","caller":"traceutil/trace.go:171","msg":"trace[1739817352] transaction","detail":"{read_only:false; response_revision:406; number_of_response:1; }","duration":"103.314999ms","start":"2023-07-31T10:56:41.135Z","end":"2023-07-31T10:56:41.238Z","steps":["trace[1739817352] 'process raft request'  (duration: 101.853861ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-31T10:56:45.833Z","caller":"traceutil/trace.go:171","msg":"trace[1278436976] transaction","detail":"{read_only:false; response_revision:678; number_of_response:1; }","duration":"102.944177ms","start":"2023-07-31T10:56:45.730Z","end":"2023-07-31T10:56:45.833Z","steps":["trace[1278436976] 'process raft request'  (duration: 102.49179ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-31T10:57:25.449Z","caller":"traceutil/trace.go:171","msg":"trace[808186249] linearizableReadLoop","detail":"{readStateIndex:925; appliedIndex:924; }","duration":"114.826694ms","start":"2023-07-31T10:57:25.334Z","end":"2023-07-31T10:57:25.449Z","steps":["trace[808186249] 'read index received'  (duration: 31.482167ms)","trace[808186249] 'applied index is now lower than readState.Index'  (duration: 83.343689ms)"],"step_count":2}
	{"level":"info","ts":"2023-07-31T10:57:25.449Z","caller":"traceutil/trace.go:171","msg":"trace[2052555168] transaction","detail":"{read_only:false; response_revision:897; number_of_response:1; }","duration":"115.04691ms","start":"2023-07-31T10:57:25.334Z","end":"2023-07-31T10:57:25.449Z","steps":["trace[2052555168] 'process raft request'  (duration: 95.996047ms)","trace[2052555168] 'compare'  (duration: 18.910178ms)"],"step_count":2}
	{"level":"warn","ts":"2023-07-31T10:57:25.449Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.030023ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-844d8db974-vxkfw\" ","response":"range_response_count:1 size:4261"}
	{"level":"info","ts":"2023-07-31T10:57:25.449Z","caller":"traceutil/trace.go:171","msg":"trace[1977766600] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-844d8db974-vxkfw; range_end:; response_count:1; response_revision:897; }","duration":"115.07924ms","start":"2023-07-31T10:57:25.334Z","end":"2023-07-31T10:57:25.449Z","steps":["trace[1977766600] 'agreement among raft nodes before linearized reading'  (duration: 114.915162ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-31T10:57:25.854Z","caller":"traceutil/trace.go:171","msg":"trace[1986579483] linearizableReadLoop","detail":"{readStateIndex:932; appliedIndex:931; }","duration":"117.201143ms","start":"2023-07-31T10:57:25.737Z","end":"2023-07-31T10:57:25.854Z","steps":["trace[1986579483] 'read index received'  (duration: 54.95099ms)","trace[1986579483] 'applied index is now lower than readState.Index'  (duration: 62.249689ms)"],"step_count":2}
	{"level":"info","ts":"2023-07-31T10:57:25.854Z","caller":"traceutil/trace.go:171","msg":"trace[146163718] transaction","detail":"{read_only:false; response_revision:904; number_of_response:1; }","duration":"118.558612ms","start":"2023-07-31T10:57:25.736Z","end":"2023-07-31T10:57:25.854Z","steps":["trace[146163718] 'process raft request'  (duration: 56.300741ms)","trace[146163718] 'compare'  (duration: 62.090166ms)"],"step_count":2}
	{"level":"warn","ts":"2023-07-31T10:57:25.854Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.352814ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11275"}
	{"level":"info","ts":"2023-07-31T10:57:25.854Z","caller":"traceutil/trace.go:171","msg":"trace[192963695] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:904; }","duration":"117.398479ms","start":"2023-07-31T10:57:25.737Z","end":"2023-07-31T10:57:25.854Z","steps":["trace[192963695] 'agreement among raft nodes before linearized reading'  (duration: 117.296303ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-31T10:57:58.632Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.492539ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/tiller-deploy\" ","response":"range_response_count:1 size:5015"}
	{"level":"info","ts":"2023-07-31T10:57:58.632Z","caller":"traceutil/trace.go:171","msg":"trace[799827571] range","detail":"{range_begin:/registry/deployments/kube-system/tiller-deploy; range_end:; response_count:1; response_revision:1170; }","duration":"101.590591ms","start":"2023-07-31T10:57:58.530Z","end":"2023-07-31T10:57:58.632Z","steps":["trace[799827571] 'range keys from in-memory index tree'  (duration: 101.382654ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-31T10:57:58.632Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.830664ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-07-31T10:57:58.632Z","caller":"traceutil/trace.go:171","msg":"trace[1694306357] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1170; }","duration":"101.862518ms","start":"2023-07-31T10:57:58.530Z","end":"2023-07-31T10:57:58.632Z","steps":["trace[1694306357] 'range keys from in-memory index tree'  (duration: 101.764526ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-31T10:57:58.757Z","caller":"traceutil/trace.go:171","msg":"trace[922175659] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1171; }","duration":"123.635985ms","start":"2023-07-31T10:57:58.633Z","end":"2023-07-31T10:57:58.757Z","steps":["trace[922175659] 'process raft request'  (duration: 123.531605ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [5f6138a6835bc2fe448365a77fa1a384c6a790ced63b38f310c9cdb2587933d7] <==
	* 2023/07/31 10:57:44 GCP Auth Webhook started!
	2023/07/31 10:57:53 Ready to marshal response ...
	2023/07/31 10:57:53 Ready to write response ...
	2023/07/31 10:57:55 Ready to marshal response ...
	2023/07/31 10:57:55 Ready to write response ...
	2023/07/31 10:57:55 Ready to marshal response ...
	2023/07/31 10:57:55 Ready to write response ...
	2023/07/31 10:57:55 Ready to marshal response ...
	2023/07/31 10:57:55 Ready to write response ...
	2023/07/31 10:57:58 Ready to marshal response ...
	2023/07/31 10:57:58 Ready to write response ...
	2023/07/31 10:58:03 Ready to marshal response ...
	2023/07/31 10:58:03 Ready to write response ...
	2023/07/31 10:58:39 Ready to marshal response ...
	2023/07/31 10:58:39 Ready to write response ...
	2023/07/31 10:58:59 Ready to marshal response ...
	2023/07/31 10:58:59 Ready to write response ...
	2023/07/31 11:00:23 Ready to marshal response ...
	2023/07/31 11:00:23 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  11:00:33 up 43 min,  0 users,  load average: 0.32, 0.42, 0.21
	Linux addons-650980 5.15.0-1038-gcp #46~20.04.1-Ubuntu SMP Fri Jul 14 09:48:19 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [5d3753ecb9508e159baa5e01387339a8b74ee6c54b11b0d5a52f2861bf0eacae] <==
	* I0731 10:58:30.270982       1 main.go:227] handling current node
	I0731 10:58:40.275268       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 10:58:40.275298       1 main.go:227] handling current node
	I0731 10:58:50.287121       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 10:58:50.287145       1 main.go:227] handling current node
	I0731 10:59:00.300208       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 10:59:00.300230       1 main.go:227] handling current node
	I0731 10:59:10.312227       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 10:59:10.312250       1 main.go:227] handling current node
	I0731 10:59:20.323294       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 10:59:20.323318       1 main.go:227] handling current node
	I0731 10:59:30.335093       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 10:59:30.335118       1 main.go:227] handling current node
	I0731 10:59:40.347191       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 10:59:40.347213       1 main.go:227] handling current node
	I0731 10:59:50.359192       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 10:59:50.359216       1 main.go:227] handling current node
	I0731 11:00:00.371078       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 11:00:00.371100       1 main.go:227] handling current node
	I0731 11:00:10.374798       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 11:00:10.374821       1 main.go:227] handling current node
	I0731 11:00:20.385592       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 11:00:20.385618       1 main.go:227] handling current node
	I0731 11:00:30.389061       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 11:00:30.389091       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [cc5fea6087a58a60d5e50fd2ca11599a719519175fcd3cee0f5bbbc5cf80a04b] <==
	* I0731 10:59:14.040883       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 10:59:14.040933       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 10:59:14.046461       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 10:59:14.046518       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 10:59:14.053760       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 10:59:14.053814       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 10:59:14.054329       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 10:59:14.054437       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 10:59:14.063728       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 10:59:14.063777       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 10:59:14.068782       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 10:59:14.069327       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 10:59:14.076855       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 10:59:14.076892       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 10:59:14.079784       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 10:59:14.079826       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0731 10:59:15.055738       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0731 10:59:15.077550       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0731 10:59:15.136950       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0731 10:59:32.254965       1 handler_proxy.go:144] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0731 10:59:32.254989       1 handler_proxy.go:100] no RequestInfo found in the context
	E0731 10:59:32.255024       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0731 10:59:32.255032       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0731 11:00:23.736812       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs=map[IPv4:10.97.34.249]
	
	* 
	* ==> kube-controller-manager [2f6cbaed3ff8c96e214ac545891ccc5f33193ea18668513625e803f703ef8c61] <==
	* E0731 10:59:34.149612       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 10:59:34.328154       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 10:59:34.328184       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 10:59:35.313492       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 10:59:35.313519       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0731 10:59:37.480971       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0731 10:59:37.481008       1 shared_informer.go:318] Caches are synced for resource quota
	I0731 10:59:37.939243       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0731 10:59:37.939292       1 shared_informer.go:318] Caches are synced for garbage collector
	W0731 10:59:48.336116       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 10:59:48.336144       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 10:59:55.766021       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 10:59:55.766050       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 10:59:58.715081       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 10:59:58.715116       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 11:00:07.499628       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 11:00:07.499657       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 11:00:21.417948       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 11:00:21.417985       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 11:00:23.149953       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 11:00:23.149981       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0731 11:00:23.584665       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-65bdb79f98 to 1"
	I0731 11:00:23.595195       1 event.go:307] "Event occurred" object="default/hello-world-app-65bdb79f98" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-65bdb79f98-75lsc"
	I0731 11:00:25.555471       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-create
	I0731 11:00:25.559647       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-patch
	
	* 
	* ==> kube-proxy [6b0d630fb837d936bdc8696b653abbcebae0a02827d097ae002c19a5f5d5fe16] <==
	* I0731 10:56:40.632236       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0731 10:56:40.632472       1 server_others.go:110] "Detected node IP" address="192.168.49.2"
	I0731 10:56:40.632515       1 server_others.go:554] "Using iptables proxy"
	I0731 10:56:41.242966       1 server_others.go:192] "Using iptables Proxier"
	I0731 10:56:41.243057       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0731 10:56:41.243104       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0731 10:56:41.243143       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0731 10:56:41.243191       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 10:56:41.243806       1 server.go:658] "Version info" version="v1.27.3"
	I0731 10:56:41.244228       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 10:56:41.245174       1 config.go:188] "Starting service config controller"
	I0731 10:56:41.245745       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0731 10:56:41.245326       1 config.go:97] "Starting endpoint slice config controller"
	I0731 10:56:41.343439       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0731 10:56:41.344325       1 config.go:315] "Starting node config controller"
	I0731 10:56:41.345439       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0731 10:56:41.452303       1 shared_informer.go:318] Caches are synced for node config
	I0731 10:56:41.452808       1 shared_informer.go:318] Caches are synced for service config
	I0731 10:56:41.452828       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [0020f09ae7def508fc03590fc4587ddd2decccc34c120ff5f2cfe984ca058b40] <==
	* W0731 10:56:21.132160       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 10:56:21.132189       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 10:56:21.132296       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 10:56:21.132353       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 10:56:21.132399       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 10:56:21.132430       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 10:56:21.132314       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 10:56:21.132451       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 10:56:21.132975       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 10:56:21.133009       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 10:56:21.133871       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 10:56:21.133952       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 10:56:21.954671       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 10:56:21.954699       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 10:56:22.028123       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 10:56:22.028153       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0731 10:56:22.057924       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 10:56:22.057955       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0731 10:56:22.140130       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 10:56:22.140158       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0731 10:56:22.151596       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 10:56:22.151654       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 10:56:22.195251       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 10:56:22.195291       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0731 10:56:25.352791       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Jul 31 11:00:24 addons-650980 kubelet[1562]: W0731 11:00:24.256512    1562 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/8dadec99eb045b20e27b077e498b614d65fce87f61a30f3d4813a9e945398158/crio-a0c816cf13c843a82c6e8e435050a18240ee80b7e96ffad6f37abd822f933938 WatchSource:0}: Error finding container a0c816cf13c843a82c6e8e435050a18240ee80b7e96ffad6f37abd822f933938: Status 404 returned error can't find the container with id a0c816cf13c843a82c6e8e435050a18240ee80b7e96ffad6f37abd822f933938
	Jul 31 11:00:24 addons-650980 kubelet[1562]: I0731 11:00:24.737930    1562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rw85\" (UniqueName: \"kubernetes.io/projected/bf7a826a-362b-4dc4-baab-fb1d9c417b91-kube-api-access-5rw85\") pod \"bf7a826a-362b-4dc4-baab-fb1d9c417b91\" (UID: \"bf7a826a-362b-4dc4-baab-fb1d9c417b91\") "
	Jul 31 11:00:24 addons-650980 kubelet[1562]: I0731 11:00:24.739644    1562 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf7a826a-362b-4dc4-baab-fb1d9c417b91-kube-api-access-5rw85" (OuterVolumeSpecName: "kube-api-access-5rw85") pod "bf7a826a-362b-4dc4-baab-fb1d9c417b91" (UID: "bf7a826a-362b-4dc4-baab-fb1d9c417b91"). InnerVolumeSpecName "kube-api-access-5rw85". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 31 11:00:24 addons-650980 kubelet[1562]: I0731 11:00:24.838705    1562 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-5rw85\" (UniqueName: \"kubernetes.io/projected/bf7a826a-362b-4dc4-baab-fb1d9c417b91-kube-api-access-5rw85\") on node \"addons-650980\" DevicePath \"\""
	Jul 31 11:00:24 addons-650980 kubelet[1562]: I0731 11:00:24.936192    1562 scope.go:115] "RemoveContainer" containerID="fe4bc493e1d756717580c5bc1e7654673d2981c5ed02c24e0ef3026cd23fe00c"
	Jul 31 11:00:24 addons-650980 kubelet[1562]: I0731 11:00:24.954056    1562 scope.go:115] "RemoveContainer" containerID="fe4bc493e1d756717580c5bc1e7654673d2981c5ed02c24e0ef3026cd23fe00c"
	Jul 31 11:00:24 addons-650980 kubelet[1562]: E0731 11:00:24.954449    1562 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe4bc493e1d756717580c5bc1e7654673d2981c5ed02c24e0ef3026cd23fe00c\": container with ID starting with fe4bc493e1d756717580c5bc1e7654673d2981c5ed02c24e0ef3026cd23fe00c not found: ID does not exist" containerID="fe4bc493e1d756717580c5bc1e7654673d2981c5ed02c24e0ef3026cd23fe00c"
	Jul 31 11:00:24 addons-650980 kubelet[1562]: I0731 11:00:24.954499    1562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:fe4bc493e1d756717580c5bc1e7654673d2981c5ed02c24e0ef3026cd23fe00c} err="failed to get container status \"fe4bc493e1d756717580c5bc1e7654673d2981c5ed02c24e0ef3026cd23fe00c\": rpc error: code = NotFound desc = could not find container \"fe4bc493e1d756717580c5bc1e7654673d2981c5ed02c24e0ef3026cd23fe00c\": container with ID starting with fe4bc493e1d756717580c5bc1e7654673d2981c5ed02c24e0ef3026cd23fe00c not found: ID does not exist"
	Jul 31 11:00:25 addons-650980 kubelet[1562]: E0731 11:00:25.565606    1562 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7799c6795f-p2xzl.1776eec1a38524b2", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7799c6795f-p2xzl", UID:"26e46fbb-3d61-4239-bf94-10b36baf3578", APIVersion:"v1", ResourceVersion:"742", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"addons-650980"}, FirstTimestamp:time.Date(2023, time.July, 31, 11, 0, 25, 563710642, time.Local), LastTimestamp:time.Date(2023, time.July, 31, 11, 0, 25, 563710642, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7799c6795f-p2xzl.1776eec1a38524b2" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jul 31 11:00:25 addons-650980 kubelet[1562]: I0731 11:00:25.869398    1562 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=3fc292ef-ccf9-4c09-8d31-92d0b6ebd454 path="/var/lib/kubelet/pods/3fc292ef-ccf9-4c09-8d31-92d0b6ebd454/volumes"
	Jul 31 11:00:25 addons-650980 kubelet[1562]: I0731 11:00:25.869738    1562 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=a1a19ef2-89b9-41c9-b105-89c1468003da path="/var/lib/kubelet/pods/a1a19ef2-89b9-41c9-b105-89c1468003da/volumes"
	Jul 31 11:00:25 addons-650980 kubelet[1562]: I0731 11:00:25.870006    1562 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=bf7a826a-362b-4dc4-baab-fb1d9c417b91 path="/var/lib/kubelet/pods/bf7a826a-362b-4dc4-baab-fb1d9c417b91/volumes"
	Jul 31 11:00:25 addons-650980 kubelet[1562]: I0731 11:00:25.948253    1562 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-65bdb79f98-75lsc" podStartSLOduration=1.852229605 podCreationTimestamp="2023-07-31 11:00:23 +0000 UTC" firstStartedPulling="2023-07-31 11:00:24.259663394 +0000 UTC m=+240.481380685" lastFinishedPulling="2023-07-31 11:00:25.355634818 +0000 UTC m=+241.577352108" observedRunningTime="2023-07-31 11:00:25.948061517 +0000 UTC m=+242.169778817" watchObservedRunningTime="2023-07-31 11:00:25.948201028 +0000 UTC m=+242.169918325"
	Jul 31 11:00:26 addons-650980 kubelet[1562]: I0731 11:00:26.942939    1562 scope.go:115] "RemoveContainer" containerID="75f337e8d81eac139a87e1aca1e8b2386fbc5271db27c61cdd2397b4142a23d4"
	Jul 31 11:00:26 addons-650980 kubelet[1562]: I0731 11:00:26.953203    1562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrsbn\" (UniqueName: \"kubernetes.io/projected/26e46fbb-3d61-4239-bf94-10b36baf3578-kube-api-access-nrsbn\") pod \"26e46fbb-3d61-4239-bf94-10b36baf3578\" (UID: \"26e46fbb-3d61-4239-bf94-10b36baf3578\") "
	Jul 31 11:00:26 addons-650980 kubelet[1562]: I0731 11:00:26.953249    1562 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/26e46fbb-3d61-4239-bf94-10b36baf3578-webhook-cert\") pod \"26e46fbb-3d61-4239-bf94-10b36baf3578\" (UID: \"26e46fbb-3d61-4239-bf94-10b36baf3578\") "
	Jul 31 11:00:26 addons-650980 kubelet[1562]: I0731 11:00:26.955670    1562 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26e46fbb-3d61-4239-bf94-10b36baf3578-kube-api-access-nrsbn" (OuterVolumeSpecName: "kube-api-access-nrsbn") pod "26e46fbb-3d61-4239-bf94-10b36baf3578" (UID: "26e46fbb-3d61-4239-bf94-10b36baf3578"). InnerVolumeSpecName "kube-api-access-nrsbn". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 31 11:00:26 addons-650980 kubelet[1562]: I0731 11:00:26.955959    1562 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26e46fbb-3d61-4239-bf94-10b36baf3578-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "26e46fbb-3d61-4239-bf94-10b36baf3578" (UID: "26e46fbb-3d61-4239-bf94-10b36baf3578"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 31 11:00:26 addons-650980 kubelet[1562]: I0731 11:00:26.959671    1562 scope.go:115] "RemoveContainer" containerID="75f337e8d81eac139a87e1aca1e8b2386fbc5271db27c61cdd2397b4142a23d4"
	Jul 31 11:00:26 addons-650980 kubelet[1562]: E0731 11:00:26.959996    1562 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75f337e8d81eac139a87e1aca1e8b2386fbc5271db27c61cdd2397b4142a23d4\": container with ID starting with 75f337e8d81eac139a87e1aca1e8b2386fbc5271db27c61cdd2397b4142a23d4 not found: ID does not exist" containerID="75f337e8d81eac139a87e1aca1e8b2386fbc5271db27c61cdd2397b4142a23d4"
	Jul 31 11:00:26 addons-650980 kubelet[1562]: I0731 11:00:26.960031    1562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:75f337e8d81eac139a87e1aca1e8b2386fbc5271db27c61cdd2397b4142a23d4} err="failed to get container status \"75f337e8d81eac139a87e1aca1e8b2386fbc5271db27c61cdd2397b4142a23d4\": rpc error: code = NotFound desc = could not find container \"75f337e8d81eac139a87e1aca1e8b2386fbc5271db27c61cdd2397b4142a23d4\": container with ID starting with 75f337e8d81eac139a87e1aca1e8b2386fbc5271db27c61cdd2397b4142a23d4 not found: ID does not exist"
	Jul 31 11:00:27 addons-650980 kubelet[1562]: I0731 11:00:27.054523    1562 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-nrsbn\" (UniqueName: \"kubernetes.io/projected/26e46fbb-3d61-4239-bf94-10b36baf3578-kube-api-access-nrsbn\") on node \"addons-650980\" DevicePath \"\""
	Jul 31 11:00:27 addons-650980 kubelet[1562]: I0731 11:00:27.054558    1562 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/26e46fbb-3d61-4239-bf94-10b36baf3578-webhook-cert\") on node \"addons-650980\" DevicePath \"\""
	Jul 31 11:00:27 addons-650980 kubelet[1562]: I0731 11:00:27.869537    1562 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=26e46fbb-3d61-4239-bf94-10b36baf3578 path="/var/lib/kubelet/pods/26e46fbb-3d61-4239-bf94-10b36baf3578/volumes"
	Jul 31 11:00:32 addons-650980 kubelet[1562]: W0731 11:00:32.160154    1562 container.go:586] Failed to update stats for container "/crio-885405d8f07f7b965a6983eeaaf3adefca646f48b508c8d30e26d7087324e485": unable to determine device info for dir: /var/lib/containers/storage/overlay/28d4a099990c0ff67eecbe6db252956aa26f81f90ef0d3878f23d0319f0ace1f/diff: stat failed on /var/lib/containers/storage/overlay/28d4a099990c0ff67eecbe6db252956aa26f81f90ef0d3878f23d0319f0ace1f/diff with error: no such file or directory, continuing to push stats
	
	* 
	* ==> storage-provisioner [3005d3388b9ceb25afef30ded4cd50061d7a8241af7621e60dbc7da5a8ed476d] <==
	* I0731 10:57:11.663423       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 10:57:11.673538       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 10:57:11.673578       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 10:57:11.680376       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 10:57:11.680488       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"27f8e6d9-62c9-4c7f-8d28-566da373b82c", APIVersion:"v1", ResourceVersion:"829", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-650980_189fcde4-ecf4-4573-8538-02cf60676b08 became leader
	I0731 10:57:11.680532       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-650980_189fcde4-ecf4-4573-8538-02cf60676b08!
	I0731 10:57:11.780825       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-650980_189fcde4-ecf4-4573-8538-02cf60676b08!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-650980 -n addons-650980
helpers_test.go:261: (dbg) Run:  kubectl --context addons-650980 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (151.22s)
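
The failing step in this test is `curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'` run inside the node; exit status 28 is curl's timeout code, meaning the ingress controller never answered on port 80. The following is a minimal Go sketch of the same check, not the minikube test code itself; URL, Host header, and the 10s timeout are illustrative values mirroring the command above:

	// Sketch: HTTP GET to 127.0.0.1 with the Host header overridden,
	// which is what `curl -H 'Host: ...'` does against an ingress.
	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// curl's exit 28 corresponds to a deadline like this one expiring.
		client := &http.Client{Timeout: 10 * time.Second}
		req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
		if err != nil {
			panic(err)
		}
		// Go carries the Host header on the Request struct, not in req.Header.
		req.Host = "nginx.example.com"
		resp, err := client.Do(req)
		if err != nil {
			fmt.Println("no response from ingress:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("ingress answered:", resp.Status)
	}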

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (12.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-671868 /tmp/TestFunctionalparallelMountCmdspecific-port348832455/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-671868 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (290.336145ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-671868 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (311.639745ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-671868 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (244.620282ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-671868 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (261.016944ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-671868 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (240.872874ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-671868 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (409.124034ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-671868 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (354.236187ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:253: /mount-9p did not appear within 11.482313968s: exit status 1
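The seven identical findmnt attempts above are one poll loop: the harness re-runs `findmnt -T /mount-9p` until the 9p mount appears or roughly 11.5s elapse. A minimal sketch of that poll-with-deadline pattern follows; the one-second interval and deadline are illustrative, and the real harness runs findmnt through `minikube ssh` rather than locally:

	// Sketch: poll a command until it succeeds or a deadline passes.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitForMount(target string, deadline time.Duration) error {
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			// findmnt exits non-zero when the target is not a mountpoint.
			if err := exec.Command("findmnt", "-T", target).Run(); err == nil {
				return nil
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("%s did not appear within %s", target, deadline)
	}

	func main() {
		if err := waitForMount("/mount-9p", 11*time.Second); err != nil {
			fmt.Println(err)
		}
	}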
functional_test_mount_test.go:220: "TestFunctional/parallel/MountCmd/specific-port" failed, getting debug info...
functional_test_mount_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-671868 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (400.451665ms)

                                                
                                                
-- stdout --
	total 8
	drwxr-xr-x 2 root root 4096 Jul 31 11:03 .
	drwxr-xr-x 1 root root 4096 Jul 31 11:03 ..
	cat: /mount-9p/pod-dates: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:223: debugging command "out/minikube-linux-amd64 -p functional-671868 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-671868 ssh "sudo umount -f /mount-9p": exit status 1 (303.502189ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-671868 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-671868 /tmp/TestFunctionalparallelMountCmdspecific-port348832455/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:234: (dbg) [out/minikube-linux-amd64 mount -p functional-671868 /tmp/TestFunctionalparallelMountCmdspecific-port348832455/001:/mount-9p --alsologtostderr -v=1 --port 46464] stdout:

                                                
                                                

                                                
                                                
functional_test_mount_test.go:234: (dbg) [out/minikube-linux-amd64 mount -p functional-671868 /tmp/TestFunctionalparallelMountCmdspecific-port348832455/001:/mount-9p --alsologtostderr -v=1 --port 46464] stderr:
I0731 11:04:05.209813   50679 out.go:296] Setting OutFile to fd 1 ...
I0731 11:04:05.209967   50679 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0731 11:04:05.209978   50679 out.go:309] Setting ErrFile to fd 2...
I0731 11:04:05.209985   50679 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0731 11:04:05.210307   50679 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16968-8855/.minikube/bin
I0731 11:04:05.210632   50679 mustload.go:65] Loading cluster: functional-671868
I0731 11:04:05.211077   50679 config.go:182] Loaded profile config "functional-671868": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0731 11:04:05.211634   50679 cli_runner.go:164] Run: docker container inspect functional-671868 --format={{.State.Status}}
I0731 11:04:05.230246   50679 host.go:66] Checking if "functional-671868" exists ...
I0731 11:04:05.230573   50679 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0731 11:04:05.330857   50679 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:48 SystemTime:2023-07-31 11:04:05.320207438 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1038-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0731 11:04:05.331040   50679 cli_runner.go:164] Run: docker network inspect functional-671868 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0731 11:04:05.356382   50679 out.go:177] 
W0731 11:04:05.357899   50679 out.go:239] X Exiting due to IF_MOUNT_PORT: Error finding port for mount: Error accessing port 46464
W0731 11:04:05.357912   50679 out.go:239] * 
W0731 11:04:05.359870   50679 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_mount_13bbe2a812ea877786eb4c08ada8f290d06ddc5d_0.log                   │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0731 11:04:05.361567   50679 out.go:177] 
--- FAIL: TestFunctional/parallel/MountCmd/specific-port (12.30s)
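
The root cause in the stderr above is IF_MOUNT_PORT: with `--port 46464` the mount server must bind that exact port, and it could not, so the mount command exited before anything appeared at /mount-9p. Below is a hedged sketch of the simplest form of such a probe, attempting to listen on the fixed port; this is not minikube's actual implementation, just an illustration of why a fixed `--port` fails when the port is already held:

	// Sketch: check whether a fixed TCP port can be bound. If another
	// process (or a stale mount server) already holds it, net.Listen
	// fails, which is the situation IF_MOUNT_PORT reports above.
	package main

	import (
		"fmt"
		"net"
	)

	func portFree(port int) error {
		l, err := net.Listen("tcp", fmt.Sprintf(":%d", port))
		if err != nil {
			return err
		}
		return l.Close()
	}

	func main() {
		if err := portFree(46464); err != nil {
			fmt.Println("port 46464 unavailable:", err)
		} else {
			fmt.Println("port 46464 is free")
		}
	}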

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (179.73s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-033299 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-033299 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (12.767358751s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-033299 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-033299 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [fdb79858-aefb-4eec-990f-e77fcc01bd2b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [fdb79858-aefb-4eec-990f-e77fcc01bd2b] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.009581621s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-033299 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0731 11:07:48.723987   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/client.crt: no such file or directory
E0731 11:08:16.406594   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/client.crt: no such file or directory
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-033299 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.90506076s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-033299 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-033299 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.011041256s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

                                                
                                                

                                                
                                                

                                                
                                                
stderr: 
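`nslookup hello-john.test 192.168.49.2` bypasses the host's resolver and queries the cluster's ingress-dns server directly; here every query timed out. A minimal Go equivalent of that direct-server lookup is sketched below, using net.Resolver with a custom dialer pinned to 192.168.49.2:53 (hostname and server taken from the command above; the timeouts are illustrative):

	// Sketch: resolve a name against an explicit DNS server, as nslookup
	// does when given a server argument.
	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		r := &net.Resolver{
			PreferGo: true, // force Go's resolver so the custom Dial below is used
			Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
				d := net.Dialer{Timeout: 5 * time.Second}
				// Ignore the default server address and pin the ingress-dns endpoint.
				return d.DialContext(ctx, network, "192.168.49.2:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
		defer cancel()
		addrs, err := r.LookupHost(ctx, "hello-john.test")
		if err != nil {
			fmt.Println("lookup failed:", err) // a timeout here matches the failure above
			return
		}
		fmt.Println("resolved to:", addrs)
	}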
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-033299 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-033299 addons disable ingress-dns --alsologtostderr -v=1: (2.097756098s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-033299 addons disable ingress --alsologtostderr -v=1
E0731 11:08:46.148024   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/functional-671868/client.crt: no such file or directory
E0731 11:08:46.153288   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/functional-671868/client.crt: no such file or directory
E0731 11:08:46.163623   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/functional-671868/client.crt: no such file or directory
E0731 11:08:46.183943   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/functional-671868/client.crt: no such file or directory
E0731 11:08:46.224263   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/functional-671868/client.crt: no such file or directory
E0731 11:08:46.304561   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/functional-671868/client.crt: no such file or directory
E0731 11:08:46.464991   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/functional-671868/client.crt: no such file or directory
E0731 11:08:46.785548   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/functional-671868/client.crt: no such file or directory
E0731 11:08:47.426440   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/functional-671868/client.crt: no such file or directory
E0731 11:08:48.706913   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/functional-671868/client.crt: no such file or directory
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-033299 addons disable ingress --alsologtostderr -v=1: (7.383870325s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-033299
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-033299:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7733f8361d74918fcfb2e5b8dda7d67b374d2128e853cabbccce35be0b4cd890",
	        "Created": "2023-07-31T11:04:46.59519744Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 55732,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-31T11:04:46.873879868Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6cc01e6091959400f260dc442708e7c71630b58dab1f7c344cb00926bd84950",
	        "ResolvConfPath": "/var/lib/docker/containers/7733f8361d74918fcfb2e5b8dda7d67b374d2128e853cabbccce35be0b4cd890/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7733f8361d74918fcfb2e5b8dda7d67b374d2128e853cabbccce35be0b4cd890/hostname",
	        "HostsPath": "/var/lib/docker/containers/7733f8361d74918fcfb2e5b8dda7d67b374d2128e853cabbccce35be0b4cd890/hosts",
	        "LogPath": "/var/lib/docker/containers/7733f8361d74918fcfb2e5b8dda7d67b374d2128e853cabbccce35be0b4cd890/7733f8361d74918fcfb2e5b8dda7d67b374d2128e853cabbccce35be0b4cd890-json.log",
	        "Name": "/ingress-addon-legacy-033299",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-033299:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ingress-addon-legacy-033299",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/75bb9ae806fac45d2c29fed9088df54c07854fa3603455f682505a23a9170a44-init/diff:/var/lib/docker/overlay2/024d10bc12a315dda5382be7dcc437728fbe4eb773f76ea4124e9f17d757e8de/diff",
	                "MergedDir": "/var/lib/docker/overlay2/75bb9ae806fac45d2c29fed9088df54c07854fa3603455f682505a23a9170a44/merged",
	                "UpperDir": "/var/lib/docker/overlay2/75bb9ae806fac45d2c29fed9088df54c07854fa3603455f682505a23a9170a44/diff",
	                "WorkDir": "/var/lib/docker/overlay2/75bb9ae806fac45d2c29fed9088df54c07854fa3603455f682505a23a9170a44/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-033299",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-033299/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-033299",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-033299",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-033299",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bb77f12a76ec08babae1544055c8f13dc09716b642d81dda5cd6adee16799f93",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/bb77f12a76ec",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-033299": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "7733f8361d74",
	                        "ingress-addon-legacy-033299"
	                    ],
	                    "NetworkID": "84679b1483a531eaa339fc6a74257e165830437a216c30b778b8689ff50721d0",
	                    "EndpointID": "3af76f57931c79f34d097b6facb14594ed59232aa3716f28100b9f10da9edee7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-033299 -n ingress-addon-legacy-033299
E0731 11:08:51.267955   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/functional-671868/client.crt: no such file or directory
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-033299 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-033299 logs -n 25: (1.0374016s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |                                     Args                                     |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image   | functional-671868 image save                                                 | functional-671868           | jenkins | v1.31.1 | 31 Jul 23 11:04 UTC | 31 Jul 23 11:04 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-671868                     |                             |         |         |                     |                     |
	|         | /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| ssh     | functional-671868 ssh findmnt                                                | functional-671868           | jenkins | v1.31.1 | 31 Jul 23 11:04 UTC | 31 Jul 23 11:04 UTC |
	|         | -T /mount1                                                                   |                             |         |         |                     |                     |
	| ssh     | functional-671868 ssh findmnt                                                | functional-671868           | jenkins | v1.31.1 | 31 Jul 23 11:04 UTC | 31 Jul 23 11:04 UTC |
	|         | -T /mount2                                                                   |                             |         |         |                     |                     |
	| ssh     | functional-671868 ssh findmnt                                                | functional-671868           | jenkins | v1.31.1 | 31 Jul 23 11:04 UTC | 31 Jul 23 11:04 UTC |
	|         | -T /mount3                                                                   |                             |         |         |                     |                     |
	| mount   | -p functional-671868                                                         | functional-671868           | jenkins | v1.31.1 | 31 Jul 23 11:04 UTC |                     |
	|         | --kill=true                                                                  |                             |         |         |                     |                     |
	| image   | functional-671868 image rm                                                   | functional-671868           | jenkins | v1.31.1 | 31 Jul 23 11:04 UTC | 31 Jul 23 11:04 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-671868                     |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-671868 image ls                                                   | functional-671868           | jenkins | v1.31.1 | 31 Jul 23 11:04 UTC | 31 Jul 23 11:04 UTC |
	| image   | functional-671868 image load                                                 | functional-671868           | jenkins | v1.31.1 | 31 Jul 23 11:04 UTC | 31 Jul 23 11:04 UTC |
	|         | /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-671868 image ls                                                   | functional-671868           | jenkins | v1.31.1 | 31 Jul 23 11:04 UTC | 31 Jul 23 11:04 UTC |
	| image   | functional-671868 image save --daemon                                        | functional-671868           | jenkins | v1.31.1 | 31 Jul 23 11:04 UTC | 31 Jul 23 11:04 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-671868                     |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| ssh     | functional-671868 ssh pgrep                                                  | functional-671868           | jenkins | v1.31.1 | 31 Jul 23 11:04 UTC |                     |
	|         | buildkitd                                                                    |                             |         |         |                     |                     |
	| image   | functional-671868                                                            | functional-671868           | jenkins | v1.31.1 | 31 Jul 23 11:04 UTC | 31 Jul 23 11:04 UTC |
	|         | image ls --format yaml                                                       |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-671868                                                            | functional-671868           | jenkins | v1.31.1 | 31 Jul 23 11:04 UTC | 31 Jul 23 11:04 UTC |
	|         | image ls --format short                                                      |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-671868                                                            | functional-671868           | jenkins | v1.31.1 | 31 Jul 23 11:04 UTC | 31 Jul 23 11:04 UTC |
	|         | image ls --format table                                                      |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-671868                                                            | functional-671868           | jenkins | v1.31.1 | 31 Jul 23 11:04 UTC |                     |
	|         | image ls --format json                                                       |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-671868 image build -t                                             | functional-671868           | jenkins | v1.31.1 | 31 Jul 23 11:04 UTC | 31 Jul 23 11:04 UTC |
	|         | localhost/my-image:functional-671868                                         |                             |         |         |                     |                     |
	|         | testdata/build --alsologtostderr                                             |                             |         |         |                     |                     |
	| image   | functional-671868 image ls                                                   | functional-671868           | jenkins | v1.31.1 | 31 Jul 23 11:04 UTC | 31 Jul 23 11:04 UTC |
	| delete  | -p functional-671868                                                         | functional-671868           | jenkins | v1.31.1 | 31 Jul 23 11:04 UTC | 31 Jul 23 11:04 UTC |
	| start   | -p ingress-addon-legacy-033299                                               | ingress-addon-legacy-033299 | jenkins | v1.31.1 | 31 Jul 23 11:04 UTC | 31 Jul 23 11:05 UTC |
	|         | --kubernetes-version=v1.18.20                                                |                             |         |         |                     |                     |
	|         | --memory=4096 --wait=true                                                    |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	|         | -v=5 --driver=docker                                                         |                             |         |         |                     |                     |
	|         | --container-runtime=crio                                                     |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-033299                                                  | ingress-addon-legacy-033299 | jenkins | v1.31.1 | 31 Jul 23 11:05 UTC | 31 Jul 23 11:05 UTC |
	|         | addons enable ingress                                                        |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                       |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-033299                                                  | ingress-addon-legacy-033299 | jenkins | v1.31.1 | 31 Jul 23 11:05 UTC | 31 Jul 23 11:05 UTC |
	|         | addons enable ingress-dns                                                    |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                       |                             |         |         |                     |                     |
	| ssh     | ingress-addon-legacy-033299                                                  | ingress-addon-legacy-033299 | jenkins | v1.31.1 | 31 Jul 23 11:06 UTC |                     |
	|         | ssh curl -s http://127.0.0.1/                                                |                             |         |         |                     |                     |
	|         | -H 'Host: nginx.example.com'                                                 |                             |         |         |                     |                     |
	| ip      | ingress-addon-legacy-033299 ip                                               | ingress-addon-legacy-033299 | jenkins | v1.31.1 | 31 Jul 23 11:08 UTC | 31 Jul 23 11:08 UTC |
	| addons  | ingress-addon-legacy-033299                                                  | ingress-addon-legacy-033299 | jenkins | v1.31.1 | 31 Jul 23 11:08 UTC | 31 Jul 23 11:08 UTC |
	|         | addons disable ingress-dns                                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                       |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-033299                                                  | ingress-addon-legacy-033299 | jenkins | v1.31.1 | 31 Jul 23 11:08 UTC | 31 Jul 23 11:08 UTC |
	|         | addons disable ingress                                                       |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                       |                             |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/31 11:04:35
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.20.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 11:04:35.601194   55115 out.go:296] Setting OutFile to fd 1 ...
	I0731 11:04:35.601313   55115 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 11:04:35.601323   55115 out.go:309] Setting ErrFile to fd 2...
	I0731 11:04:35.601330   55115 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 11:04:35.601544   55115 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16968-8855/.minikube/bin
	I0731 11:04:35.602120   55115 out.go:303] Setting JSON to false
	I0731 11:04:35.603340   55115 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2827,"bootTime":1690798649,"procs":499,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1038-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 11:04:35.603401   55115 start.go:138] virtualization: kvm guest
	I0731 11:04:35.605392   55115 out.go:177] * [ingress-addon-legacy-033299] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 11:04:35.607063   55115 notify.go:220] Checking for updates...
	I0731 11:04:35.607073   55115 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 11:04:35.608444   55115 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 11:04:35.609659   55115 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16968-8855/kubeconfig
	I0731 11:04:35.610847   55115 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16968-8855/.minikube
	I0731 11:04:35.612418   55115 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 11:04:35.613890   55115 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 11:04:35.615219   55115 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 11:04:35.635390   55115 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0731 11:04:35.635488   55115 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 11:04:35.685555   55115 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:37 SystemTime:2023-07-31 11:04:35.676970709 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1038-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0731 11:04:35.685660   55115 docker.go:294] overlay module found
	I0731 11:04:35.687287   55115 out.go:177] * Using the docker driver based on user configuration
	I0731 11:04:35.688599   55115 start.go:298] selected driver: docker
	I0731 11:04:35.688614   55115 start.go:898] validating driver "docker" against <nil>
	I0731 11:04:35.688623   55115 start.go:909] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 11:04:35.689306   55115 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 11:04:35.738671   55115 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:37 SystemTime:2023-07-31 11:04:35.730595311 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1038-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
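
The two identical docker info dumps above are minikube's driver health probe: it shells out to the Docker CLI rather than using the API, once when choosing the driver and once when validating it. The same probe can be reproduced by hand, assuming a local Docker daemon (command copied from the cli_runner lines above):

    # Ask the daemon for its full info block as a single JSON document,
    # exactly as minikube's cli_runner does.
    docker system info --format "{{json .}}"
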
	I0731 11:04:35.738830   55115 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0731 11:04:35.739018   55115 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 11:04:35.740873   55115 out.go:177] * Using Docker driver with root privileges
	I0731 11:04:35.742303   55115 cni.go:84] Creating CNI manager for ""
	I0731 11:04:35.742319   55115 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0731 11:04:35.742330   55115 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 11:04:35.742340   55115 start_flags.go:319] config:
	{Name:ingress-addon-legacy-033299 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-033299 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 11:04:35.744121   55115 out.go:177] * Starting control plane node ingress-addon-legacy-033299 in cluster ingress-addon-legacy-033299
	I0731 11:04:35.745376   55115 cache.go:122] Beginning downloading kic base image for docker with crio
	I0731 11:04:35.746663   55115 out.go:177] * Pulling base image ...
	I0731 11:04:35.747979   55115 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0731 11:04:35.748004   55115 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0731 11:04:35.764050   55115 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0731 11:04:35.764077   55115 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0731 11:04:35.773266   55115 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0731 11:04:35.773286   55115 cache.go:57] Caching tarball of preloaded images
	I0731 11:04:35.773417   55115 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0731 11:04:35.775168   55115 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0731 11:04:35.776605   55115 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0731 11:04:35.805498   55115 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/16968-8855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0731 11:04:38.359654   55115 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0731 11:04:38.359760   55115 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16968-8855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0731 11:04:39.315546   55115 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on crio
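
The preload download above carries an md5 checksum in its query string, which minikube saves and verifies after the transfer (the preload.go:238-256 lines). A hand-rolled equivalent, using the URL and checksum exactly as they appear in the log:

    # Fetch the v1.18.20 CRI-O preload and verify it against the md5 from the log.
    curl -fLo preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 \
      "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4"
    echo "0d02e096853189c5b37812b400898e14  preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4" | md5sum -c -
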
	I0731 11:04:39.315923   55115 profile.go:148] Saving config to /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/config.json ...
	I0731 11:04:39.315957   55115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/config.json: {Name:mk3769a300b4fb0193481915bfed603abefe53a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:04:39.316131   55115 cache.go:195] Successfully downloaded all kic artifacts
	I0731 11:04:39.316151   55115 start.go:365] acquiring machines lock for ingress-addon-legacy-033299: {Name:mk919d0e5e3ecdca89bddbedc66675f160b4458a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 11:04:39.316194   55115 start.go:369] acquired machines lock for "ingress-addon-legacy-033299" in 34.293µs
	I0731 11:04:39.316213   55115 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-033299 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-033299 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
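
Both this provisioning-time config and the flag-derived one above are persisted to the profile's config.json (see the profile.go save a few lines earlier). Since the file is plain JSON it can be pretty-printed directly; a sketch using the path from the log:

    # Dump the saved cluster config as formatted JSON.
    python3 -m json.tool \
      /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/config.json
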
	I0731 11:04:39.316270   55115 start.go:125] createHost starting for "" (driver="docker")
	I0731 11:04:39.318425   55115 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0731 11:04:39.318708   55115 start.go:159] libmachine.API.Create for "ingress-addon-legacy-033299" (driver="docker")
	I0731 11:04:39.318738   55115 client.go:168] LocalClient.Create starting
	I0731 11:04:39.318795   55115 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca.pem
	I0731 11:04:39.318823   55115 main.go:141] libmachine: Decoding PEM data...
	I0731 11:04:39.318839   55115 main.go:141] libmachine: Parsing certificate...
	I0731 11:04:39.318910   55115 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16968-8855/.minikube/certs/cert.pem
	I0731 11:04:39.318933   55115 main.go:141] libmachine: Decoding PEM data...
	I0731 11:04:39.318944   55115 main.go:141] libmachine: Parsing certificate...
	I0731 11:04:39.319205   55115 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-033299 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0731 11:04:39.334971   55115 cli_runner.go:211] docker network inspect ingress-addon-legacy-033299 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0731 11:04:39.335034   55115 network_create.go:281] running [docker network inspect ingress-addon-legacy-033299] to gather additional debugging logs...
	I0731 11:04:39.335051   55115 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-033299
	W0731 11:04:39.350134   55115 cli_runner.go:211] docker network inspect ingress-addon-legacy-033299 returned with exit code 1
	I0731 11:04:39.350161   55115 network_create.go:284] error running [docker network inspect ingress-addon-legacy-033299]: docker network inspect ingress-addon-legacy-033299: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-033299 not found
	I0731 11:04:39.350174   55115 network_create.go:286] output of [docker network inspect ingress-addon-legacy-033299]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-033299 not found
	
	** /stderr **
	I0731 11:04:39.350214   55115 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 11:04:39.365512   55115 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000dc04d0}
	I0731 11:04:39.365561   55115 network_create.go:123] attempt to create docker network ingress-addon-legacy-033299 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0731 11:04:39.365620   55115 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-033299 ingress-addon-legacy-033299
	I0731 11:04:39.415871   55115 network_create.go:107] docker network ingress-addon-legacy-033299 192.168.49.0/24 created
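
The network-create step is plain Docker and can be replayed verbatim when the subnet is free; everything below is copied from the cli_runner line above, with an inspect added to confirm the result:

    # Recreate the minikube node network with the same subnet, gateway, MTU and labels.
    docker network create --driver=bridge \
      --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true \
      --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-033299 \
      ingress-addon-legacy-033299
    # Confirm the subnet that was actually assigned.
    docker network inspect ingress-addon-legacy-033299 \
      --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
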
	I0731 11:04:39.415912   55115 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-033299" container
	I0731 11:04:39.415985   55115 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0731 11:04:39.430862   55115 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-033299 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-033299 --label created_by.minikube.sigs.k8s.io=true
	I0731 11:04:39.447068   55115 oci.go:103] Successfully created a docker volume ingress-addon-legacy-033299
	I0731 11:04:39.447135   55115 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-033299-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-033299 --entrypoint /usr/bin/test -v ingress-addon-legacy-033299:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0731 11:04:41.208184   55115 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-033299-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-033299 --entrypoint /usr/bin/test -v ingress-addon-legacy-033299:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib: (1.761008172s)
	I0731 11:04:41.208213   55115 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-033299
	I0731 11:04:41.208242   55115 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0731 11:04:41.208265   55115 kic.go:190] Starting extracting preloaded images to volume ...
	I0731 11:04:41.208336   55115 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16968-8855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-033299:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0731 11:04:46.531699   55115 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16968-8855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-033299:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (5.323306599s)
	I0731 11:04:46.531731   55115 kic.go:199] duration metric: took 5.323462 seconds to extract preloaded images to volume
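
The sidecar and tar runs above use a common Docker pattern: a one-shot container mounts a named volume and unpacks a host tarball into it, so the data survives the container. A generic sketch of the same pattern (the alpine image and package installs are assumptions for self-containment; minikube uses its kicbase image, which already ships tar and lz4):

    # Populate a named volume from a host .tar.lz4 using a throwaway container.
    docker run --rm \
      -v "$PWD/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro" \
      -v ingress-addon-legacy-033299:/extractDir \
      alpine sh -c 'apk add --no-cache lz4 tar && tar -I lz4 -xf /preloaded.tar -C /extractDir'
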
	W0731 11:04:46.531842   55115 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0731 11:04:46.531949   55115 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0731 11:04:46.580767   55115 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-033299 --name ingress-addon-legacy-033299 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-033299 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-033299 --network ingress-addon-legacy-033299 --ip 192.168.49.2 --volume ingress-addon-legacy-033299:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0731 11:04:46.881558   55115 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-033299 --format={{.State.Running}}
	I0731 11:04:46.898537   55115 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-033299 --format={{.State.Status}}
	I0731 11:04:46.916712   55115 cli_runner.go:164] Run: docker exec ingress-addon-legacy-033299 stat /var/lib/dpkg/alternatives/iptables
	I0731 11:04:46.978033   55115 oci.go:144] the created container "ingress-addon-legacy-033299" has a running status.
	I0731 11:04:46.978060   55115 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16968-8855/.minikube/machines/ingress-addon-legacy-033299/id_rsa...
	I0731 11:04:47.098556   55115 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/machines/ingress-addon-legacy-033299/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0731 11:04:47.098600   55115 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16968-8855/.minikube/machines/ingress-addon-legacy-033299/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0731 11:04:47.117398   55115 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-033299 --format={{.State.Status}}
	I0731 11:04:47.133318   55115 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0731 11:04:47.133342   55115 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-033299 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0731 11:04:47.200590   55115 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-033299 --format={{.State.Status}}
	I0731 11:04:47.217979   55115 machine.go:88] provisioning docker machine ...
	I0731 11:04:47.218013   55115 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-033299"
	I0731 11:04:47.218077   55115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-033299
	I0731 11:04:47.240349   55115 main.go:141] libmachine: Using SSH client type: native
	I0731 11:04:47.241047   55115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eb00] 0x811ba0 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0731 11:04:47.241075   55115 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-033299 && echo "ingress-addon-legacy-033299" | sudo tee /etc/hostname
	I0731 11:04:47.241737   55115 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43360->127.0.0.1:32787: read: connection reset by peer
	I0731 11:04:50.377757   55115 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-033299
	
	I0731 11:04:50.377841   55115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-033299
	I0731 11:04:50.393648   55115 main.go:141] libmachine: Using SSH client type: native
	I0731 11:04:50.394036   55115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eb00] 0x811ba0 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0731 11:04:50.394054   55115 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-033299' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-033299/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-033299' | sudo tee -a /etc/hosts; 
				fi
			fi
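
The SSH script above pins the node's identity twice: the earlier `sudo hostname ... | sudo tee /etc/hostname` command sets the kernel and persisted hostname, and this script adds or rewrites a 127.0.1.1 alias in /etc/hosts. A quick check that both landed (this exec is an illustration, not taken from the log):

    docker exec ingress-addon-legacy-033299 sh -c 'hostname; grep 127.0.1.1 /etc/hosts'
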
	I0731 11:04:50.515915   55115 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 11:04:50.515945   55115 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16968-8855/.minikube CaCertPath:/home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16968-8855/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16968-8855/.minikube}
	I0731 11:04:50.515968   55115 ubuntu.go:177] setting up certificates
	I0731 11:04:50.515979   55115 provision.go:83] configureAuth start
	I0731 11:04:50.516036   55115 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-033299
	I0731 11:04:50.532418   55115 provision.go:138] copyHostCerts
	I0731 11:04:50.532461   55115 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16968-8855/.minikube/ca.pem
	I0731 11:04:50.532490   55115 exec_runner.go:144] found /home/jenkins/minikube-integration/16968-8855/.minikube/ca.pem, removing ...
	I0731 11:04:50.532498   55115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16968-8855/.minikube/ca.pem
	I0731 11:04:50.532558   55115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16968-8855/.minikube/ca.pem (1082 bytes)
	I0731 11:04:50.532624   55115 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16968-8855/.minikube/cert.pem
	I0731 11:04:50.532653   55115 exec_runner.go:144] found /home/jenkins/minikube-integration/16968-8855/.minikube/cert.pem, removing ...
	I0731 11:04:50.532659   55115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16968-8855/.minikube/cert.pem
	I0731 11:04:50.532681   55115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16968-8855/.minikube/cert.pem (1123 bytes)
	I0731 11:04:50.532721   55115 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16968-8855/.minikube/key.pem
	I0731 11:04:50.532736   55115 exec_runner.go:144] found /home/jenkins/minikube-integration/16968-8855/.minikube/key.pem, removing ...
	I0731 11:04:50.532742   55115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16968-8855/.minikube/key.pem
	I0731 11:04:50.532765   55115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16968-8855/.minikube/key.pem (1675 bytes)
	I0731 11:04:50.532807   55115 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16968-8855/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-033299 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-033299]
	I0731 11:04:51.037356   55115 provision.go:172] copyRemoteCerts
	I0731 11:04:51.037412   55115 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 11:04:51.037449   55115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-033299
	I0731 11:04:51.053795   55115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/ingress-addon-legacy-033299/id_rsa Username:docker}
	I0731 11:04:51.144066   55115 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 11:04:51.144133   55115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 11:04:51.165364   55115 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 11:04:51.165415   55115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0731 11:04:51.185962   55115 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 11:04:51.186043   55115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 11:04:51.206600   55115 provision.go:86] duration metric: configureAuth took 690.605556ms
	I0731 11:04:51.206633   55115 ubuntu.go:193] setting minikube options for container-runtime
	I0731 11:04:51.206782   55115 config.go:182] Loaded profile config "ingress-addon-legacy-033299": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0731 11:04:51.206898   55115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-033299
	I0731 11:04:51.223237   55115 main.go:141] libmachine: Using SSH client type: native
	I0731 11:04:51.223645   55115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eb00] 0x811ba0 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0731 11:04:51.223664   55115 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 11:04:51.451612   55115 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 11:04:51.451632   55115 machine.go:91] provisioned docker machine in 4.233632413s
	I0731 11:04:51.451640   55115 client.go:171] LocalClient.Create took 12.13289718s
	I0731 11:04:51.451658   55115 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-033299" took 12.132950968s
	I0731 11:04:51.451664   55115 start.go:300] post-start starting for "ingress-addon-legacy-033299" (driver="docker")
	I0731 11:04:51.451672   55115 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 11:04:51.451717   55115 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 11:04:51.451749   55115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-033299
	I0731 11:04:51.467991   55115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/ingress-addon-legacy-033299/id_rsa Username:docker}
	I0731 11:04:51.556735   55115 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 11:04:51.559566   55115 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0731 11:04:51.559610   55115 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0731 11:04:51.559625   55115 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0731 11:04:51.559636   55115 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0731 11:04:51.559652   55115 filesync.go:126] Scanning /home/jenkins/minikube-integration/16968-8855/.minikube/addons for local assets ...
	I0731 11:04:51.559706   55115 filesync.go:126] Scanning /home/jenkins/minikube-integration/16968-8855/.minikube/files for local assets ...
	I0731 11:04:51.559769   55115 filesync.go:149] local asset: /home/jenkins/minikube-integration/16968-8855/.minikube/files/etc/ssl/certs/156462.pem -> 156462.pem in /etc/ssl/certs
	I0731 11:04:51.559779   55115 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/files/etc/ssl/certs/156462.pem -> /etc/ssl/certs/156462.pem
	I0731 11:04:51.559859   55115 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 11:04:51.567234   55115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/files/etc/ssl/certs/156462.pem --> /etc/ssl/certs/156462.pem (1708 bytes)
	I0731 11:04:51.588369   55115 start.go:303] post-start completed in 136.694176ms
	I0731 11:04:51.588725   55115 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-033299
	I0731 11:04:51.605083   55115 profile.go:148] Saving config to /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/config.json ...
	I0731 11:04:51.605360   55115 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 11:04:51.605406   55115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-033299
	I0731 11:04:51.620596   55115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/ingress-addon-legacy-033299/id_rsa Username:docker}
	I0731 11:04:51.708369   55115 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
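
The two df probes bracketing this step read percent-used and whole-GiB-free on /var, presumably feeding minikube's low-disk warnings. They are ordinary shell and can be run anywhere:

    # Percent of /var in use, then GiB still free (same awk column picks as the log).
    df -h /var | awk 'NR==2{print $5}'
    df -BG /var | awk 'NR==2{print $4}'
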
	I0731 11:04:51.712169   55115 start.go:128] duration metric: createHost completed in 12.395882648s
	I0731 11:04:51.712191   55115 start.go:83] releasing machines lock for "ingress-addon-legacy-033299", held for 12.395986018s
	I0731 11:04:51.712285   55115 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-033299
	I0731 11:04:51.727898   55115 ssh_runner.go:195] Run: cat /version.json
	I0731 11:04:51.727953   55115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-033299
	I0731 11:04:51.727990   55115 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 11:04:51.728050   55115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-033299
	I0731 11:04:51.744064   55115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/ingress-addon-legacy-033299/id_rsa Username:docker}
	I0731 11:04:51.744281   55115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/ingress-addon-legacy-033299/id_rsa Username:docker}
	I0731 11:04:51.917177   55115 ssh_runner.go:195] Run: systemctl --version
	I0731 11:04:51.921266   55115 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 11:04:52.055842   55115 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0731 11:04:52.059965   55115 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 11:04:52.077268   55115 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0731 11:04:52.077367   55115 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 11:04:52.103193   55115 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0731 11:04:52.103212   55115 start.go:466] detecting cgroup driver to use...
	I0731 11:04:52.103240   55115 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0731 11:04:52.103284   55115 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 11:04:52.115751   55115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 11:04:52.124978   55115 docker.go:196] disabling cri-docker service (if available) ...
	I0731 11:04:52.125030   55115 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 11:04:52.136355   55115 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 11:04:52.148199   55115 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 11:04:52.220309   55115 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 11:04:52.296335   55115 docker.go:212] disabling docker service ...
	I0731 11:04:52.296406   55115 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 11:04:52.312570   55115 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 11:04:52.322337   55115 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 11:04:52.401514   55115 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 11:04:52.481870   55115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 11:04:52.491876   55115 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 11:04:52.505874   55115 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0731 11:04:52.505925   55115 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 11:04:52.514494   55115 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 11:04:52.514566   55115 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 11:04:52.523124   55115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 11:04:52.531467   55115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
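
The four sed edits above rewrite CRI-O's drop-in config in place: pause image, cgroup manager, and a re-inserted conmon_cgroup. Collected into one runnable block (the daemon-reload and restart at the end happen a few lines further down in the log):

    # Set the pause image and cgroup driver CRI-O should use, then restart it.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl daemon-reload && sudo systemctl restart crio
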
	I0731 11:04:52.539914   55115 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 11:04:52.548102   55115 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 11:04:52.555908   55115 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 11:04:52.563302   55115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 11:04:52.632625   55115 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 11:04:52.736003   55115 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 11:04:52.736076   55115 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 11:04:52.739207   55115 start.go:534] Will wait 60s for crictl version
	I0731 11:04:52.739254   55115 ssh_runner.go:195] Run: which crictl
	I0731 11:04:52.742344   55115 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 11:04:52.774989   55115 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
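
The /etc/crictl.yaml written a few lines up is the standard way to pin crictl to a specific CRI socket, and the version call whose output appears above is the cheapest end-to-end probe of the runtime. Both reproduced as plain shell (content copied from the log, minus the %!s printf quirk):

    # Point crictl at CRI-O's socket and check the runtime responds.
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    sudo crictl version
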
	I0731 11:04:52.775078   55115 ssh_runner.go:195] Run: crio --version
	I0731 11:04:52.806920   55115 ssh_runner.go:195] Run: crio --version
	I0731 11:04:52.840928   55115 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I0731 11:04:52.842584   55115 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-033299 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 11:04:52.857747   55115 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0731 11:04:52.861237   55115 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 11:04:52.871114   55115 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0731 11:04:52.871161   55115 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 11:04:52.913900   55115 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0731 11:04:52.913965   55115 ssh_runner.go:195] Run: which lz4
	I0731 11:04:52.917137   55115 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0731 11:04:52.917234   55115 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 11:04:52.920109   55115 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 11:04:52.920131   55115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I0731 11:04:53.811234   55115 crio.go:444] Took 0.894036 seconds to copy over tarball
	I0731 11:04:53.811313   55115 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 11:04:56.067516   55115 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.25617536s)
	I0731 11:04:56.067553   55115 crio.go:451] Took 2.256298 seconds to extract the tarball
	I0731 11:04:56.067565   55115 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 11:04:56.136922   55115 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 11:04:56.168034   55115 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0731 11:04:56.168055   55115 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 11:04:56.168125   55115 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 11:04:56.168134   55115 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0731 11:04:56.168149   55115 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0731 11:04:56.168175   55115 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0731 11:04:56.168207   55115 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0731 11:04:56.168320   55115 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0731 11:04:56.168329   55115 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0731 11:04:56.168352   55115 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0731 11:04:56.169458   55115 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0731 11:04:56.169477   55115 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0731 11:04:56.169489   55115 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0731 11:04:56.169458   55115 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0731 11:04:56.169459   55115 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0731 11:04:56.169465   55115 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 11:04:56.169466   55115 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0731 11:04:56.169466   55115 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0731 11:04:56.349682   55115 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0731 11:04:56.352519   55115 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0731 11:04:56.355295   55115 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0731 11:04:56.361131   55115 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0731 11:04:56.365646   55115 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0731 11:04:56.370679   55115 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0731 11:04:56.374823   55115 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
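
Each `podman image inspect` above is a per-image existence probe inside the node (CRI-O and podman share the same containers/storage image store). The same check, looped for brevity (the loop and the "missing" message are illustrative; the image list is copied from the LoadImages line above):

    # Report which of the required v1.18.20 images are absent from the node's store.
    for img in registry.k8s.io/kube-apiserver:v1.18.20 \
               registry.k8s.io/kube-controller-manager:v1.18.20 \
               registry.k8s.io/kube-scheduler:v1.18.20 \
               registry.k8s.io/kube-proxy:v1.18.20 \
               registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 \
               registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5; do
      sudo podman image inspect --format '{{.Id}}' "$img" >/dev/null 2>&1 || echo "missing: $img"
    done
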
	I0731 11:04:56.446424   55115 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0731 11:04:56.446468   55115 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0731 11:04:56.446504   55115 ssh_runner.go:195] Run: which crictl
	I0731 11:04:56.446542   55115 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0731 11:04:56.446585   55115 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0731 11:04:56.446637   55115 ssh_runner.go:195] Run: which crictl
	I0731 11:04:56.446643   55115 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0731 11:04:56.446724   55115 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0731 11:04:56.446779   55115 ssh_runner.go:195] Run: which crictl
	I0731 11:04:56.451787   55115 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 11:04:56.454634   55115 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0731 11:04:56.454670   55115 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0731 11:04:56.454704   55115 ssh_runner.go:195] Run: which crictl
	I0731 11:04:56.460176   55115 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0731 11:04:56.460208   55115 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0731 11:04:56.460242   55115 ssh_runner.go:195] Run: which crictl
	I0731 11:04:56.534581   55115 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0731 11:04:56.534627   55115 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0731 11:04:56.534673   55115 ssh_runner.go:195] Run: which crictl
	I0731 11:04:56.537629   55115 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0731 11:04:56.537646   55115 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0731 11:04:56.537667   55115 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0731 11:04:56.537716   55115 ssh_runner.go:195] Run: which crictl
	I0731 11:04:56.537731   55115 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0731 11:04:56.537764   55115 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0731 11:04:56.660173   55115 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0731 11:04:56.660228   55115 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0731 11:04:56.660181   55115 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0731 11:04:56.660295   55115 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0731 11:04:56.660362   55115 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0731 11:04:56.660449   55115 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0731 11:04:56.660451   55115 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0731 11:04:56.736150   55115 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0731 11:04:56.736217   55115 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0731 11:04:56.736245   55115 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0731 11:04:56.736267   55115 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0731 11:04:56.736298   55115 cache_images.go:92] LoadImages completed in 568.220736ms
	W0731 11:04:56.736360   55115 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20: no such file or directory
	I0731 11:04:56.736412   55115 ssh_runner.go:195] Run: crio config
	I0731 11:04:56.776036   55115 cni.go:84] Creating CNI manager for ""
	I0731 11:04:56.776058   55115 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0731 11:04:56.776067   55115 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0731 11:04:56.776088   55115 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-033299 NodeName:ingress-addon-legacy-033299 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0731 11:04:56.776253   55115 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-033299"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
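
The kubeadm config rendered above is delivered to the node as /var/tmp/minikube/kubeadm.yaml.new (see the scp lines below). Once the node container is up, the file that actually landed can be read back for debugging with a plain exec:

    # Inspect the kubeadm config actually delivered to the node.
    docker exec ingress-addon-legacy-033299 sudo cat /var/tmp/minikube/kubeadm.yaml.new
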
	
	I0731 11:04:56.776357   55115 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-033299 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-033299 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0731 11:04:56.776424   55115 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0731 11:04:56.784216   55115 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 11:04:56.784276   55115 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 11:04:56.791653   55115 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0731 11:04:56.806850   55115 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0731 11:04:56.821884   55115 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0731 11:04:56.836968   55115 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0731 11:04:56.839791   55115 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 11:04:56.848727   55115 certs.go:56] Setting up /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299 for IP: 192.168.49.2
	I0731 11:04:56.848801   55115 certs.go:190] acquiring lock for shared ca certs: {Name:mkc3a3f248dbae88fa439f539f826d6e08b37eb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:04:56.848937   55115 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16968-8855/.minikube/ca.key
	I0731 11:04:56.848977   55115 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16968-8855/.minikube/proxy-client-ca.key
	I0731 11:04:56.849012   55115 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/client.key
	I0731 11:04:56.849030   55115 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/client.crt with IP's: []
	I0731 11:04:57.027640   55115 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/client.crt ...
	I0731 11:04:57.027681   55115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/client.crt: {Name:mk4fc5cb02e8c3e3ce6c8ede6ea985bc40aa7de0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:04:57.027858   55115 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/client.key ...
	I0731 11:04:57.027869   55115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/client.key: {Name:mkc28e069d47becff0c791c447d443185b9a5266 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:04:57.027977   55115 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/apiserver.key.dd3b5fb2
	I0731 11:04:57.027994   55115 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0731 11:04:57.105394   55115 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/apiserver.crt.dd3b5fb2 ...
	I0731 11:04:57.105427   55115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/apiserver.crt.dd3b5fb2: {Name:mke53b99e5d58a959895ed27da8d03fa2c36b468 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:04:57.105603   55115 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/apiserver.key.dd3b5fb2 ...
	I0731 11:04:57.105617   55115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/apiserver.key.dd3b5fb2: {Name:mk9cfb70009a379d8280f5fe001cf3fcc666bb99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:04:57.105686   55115 certs.go:337] copying /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/apiserver.crt
	I0731 11:04:57.105787   55115 certs.go:341] copying /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/apiserver.key
	I0731 11:04:57.105846   55115 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/proxy-client.key
	I0731 11:04:57.105865   55115 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/proxy-client.crt with IP's: []
	I0731 11:04:57.357038   55115 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/proxy-client.crt ...
	I0731 11:04:57.357072   55115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/proxy-client.crt: {Name:mk262a0de960f87d60eae562704b0e6b48def7a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:04:57.357234   55115 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/proxy-client.key ...
	I0731 11:04:57.357246   55115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/proxy-client.key: {Name:mk710f4e493ee5e9125778ba1b8224f970611021 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:04:57.357314   55115 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 11:04:57.357340   55115 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 11:04:57.357355   55115 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 11:04:57.357372   55115 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 11:04:57.357386   55115 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 11:04:57.357398   55115 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 11:04:57.357420   55115 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 11:04:57.357434   55115 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 11:04:57.357487   55115 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/home/jenkins/minikube-integration/16968-8855/.minikube/certs/15646.pem (1338 bytes)
	W0731 11:04:57.357554   55115 certs.go:433] ignoring /home/jenkins/minikube-integration/16968-8855/.minikube/certs/home/jenkins/minikube-integration/16968-8855/.minikube/certs/15646_empty.pem, impossibly tiny 0 bytes
	I0731 11:04:57.357569   55115 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 11:04:57.357598   55115 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca.pem (1082 bytes)
	I0731 11:04:57.357631   55115 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/home/jenkins/minikube-integration/16968-8855/.minikube/certs/cert.pem (1123 bytes)
	I0731 11:04:57.357659   55115 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/home/jenkins/minikube-integration/16968-8855/.minikube/certs/key.pem (1675 bytes)
	I0731 11:04:57.357704   55115 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-8855/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16968-8855/.minikube/files/etc/ssl/certs/156462.pem (1708 bytes)
	I0731 11:04:57.357757   55115 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/15646.pem -> /usr/share/ca-certificates/15646.pem
	I0731 11:04:57.357773   55115 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/files/etc/ssl/certs/156462.pem -> /usr/share/ca-certificates/156462.pem
	I0731 11:04:57.357787   55115 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 11:04:57.358442   55115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0731 11:04:57.379869   55115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 11:04:57.399994   55115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 11:04:57.420284   55115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 11:04:57.440668   55115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 11:04:57.460395   55115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0731 11:04:57.480228   55115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 11:04:57.500912   55115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 11:04:57.521274   55115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/certs/15646.pem --> /usr/share/ca-certificates/15646.pem (1338 bytes)
	I0731 11:04:57.541203   55115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/files/etc/ssl/certs/156462.pem --> /usr/share/ca-certificates/156462.pem (1708 bytes)
	I0731 11:04:57.560966   55115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 11:04:57.581508   55115 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 11:04:57.596627   55115 ssh_runner.go:195] Run: openssl version
	I0731 11:04:57.601615   55115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15646.pem && ln -fs /usr/share/ca-certificates/15646.pem /etc/ssl/certs/15646.pem"
	I0731 11:04:57.609792   55115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15646.pem
	I0731 11:04:57.612939   55115 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 31 11:01 /usr/share/ca-certificates/15646.pem
	I0731 11:04:57.612989   55115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15646.pem
	I0731 11:04:57.618990   55115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15646.pem /etc/ssl/certs/51391683.0"
	I0731 11:04:57.627476   55115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/156462.pem && ln -fs /usr/share/ca-certificates/156462.pem /etc/ssl/certs/156462.pem"
	I0731 11:04:57.635703   55115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/156462.pem
	I0731 11:04:57.638842   55115 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 31 11:01 /usr/share/ca-certificates/156462.pem
	I0731 11:04:57.638885   55115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/156462.pem
	I0731 11:04:57.645114   55115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/156462.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 11:04:57.653129   55115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 11:04:57.661156   55115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 11:04:57.664173   55115 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 31 10:56 /usr/share/ca-certificates/minikubeCA.pem
	I0731 11:04:57.664208   55115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 11:04:57.670016   55115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
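	These symlinks implement OpenSSL's hashed CA lookup: `openssl x509 -hash -noout -in cert.pem` prints the subject hash, and a link named `<hash>.0` in /etc/ssl/certs lets TLS clients find the CA by subject. The hash values (51391683, 3ec20f2e, b5213941) are taken straight from the log lines above, not recomputed; a sketch of the link step only:

	package main

	import "os"

	func main() {
		links := map[string]string{
			"/etc/ssl/certs/51391683.0": "/etc/ssl/certs/15646.pem",
			"/etc/ssl/certs/3ec20f2e.0": "/etc/ssl/certs/156462.pem",
			"/etc/ssl/certs/b5213941.0": "/etc/ssl/certs/minikubeCA.pem",
		}
		for link, target := range links {
			if err := os.Symlink(target, link); err != nil && !os.IsExist(err) {
				panic(err)
			}
		}
	}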
	I0731 11:04:57.677908   55115 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0731 11:04:57.680780   55115 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0731 11:04:57.680840   55115 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-033299 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-033299 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 11:04:57.680917   55115 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 11:04:57.680948   55115 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 11:04:57.713776   55115 cri.go:89] found id: ""
	I0731 11:04:57.713841   55115 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 11:04:57.721850   55115 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 11:04:57.729701   55115 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0731 11:04:57.729748   55115 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 11:04:57.737175   55115 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 11:04:57.737221   55115 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0731 11:04:57.778273   55115 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0731 11:04:57.778378   55115 kubeadm.go:322] [preflight] Running pre-flight checks
	I0731 11:04:57.815517   55115 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0731 11:04:57.815599   55115 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1038-gcp
	I0731 11:04:57.815631   55115 kubeadm.go:322] OS: Linux
	I0731 11:04:57.815693   55115 kubeadm.go:322] CGROUPS_CPU: enabled
	I0731 11:04:57.815736   55115 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0731 11:04:57.815817   55115 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0731 11:04:57.815873   55115 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0731 11:04:57.815944   55115 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0731 11:04:57.816001   55115 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0731 11:04:57.882721   55115 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 11:04:57.882828   55115 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 11:04:57.882911   55115 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 11:04:58.058915   55115 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 11:04:58.059756   55115 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 11:04:58.059847   55115 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0731 11:04:58.131467   55115 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 11:04:58.135269   55115 out.go:204]   - Generating certificates and keys ...
	I0731 11:04:58.135398   55115 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0731 11:04:58.135504   55115 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0731 11:04:58.431029   55115 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 11:04:58.790390   55115 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0731 11:04:59.031731   55115 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0731 11:04:59.275338   55115 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0731 11:04:59.320513   55115 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0731 11:04:59.320703   55115 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-033299 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0731 11:04:59.550876   55115 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0731 11:04:59.551045   55115 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-033299 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0731 11:04:59.725987   55115 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 11:04:59.953084   55115 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 11:05:00.204455   55115 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0731 11:05:00.204560   55115 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 11:05:00.514492   55115 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 11:05:00.616172   55115 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 11:05:00.663569   55115 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 11:05:00.756494   55115 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 11:05:00.757307   55115 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 11:05:00.759381   55115 out.go:204]   - Booting up control plane ...
	I0731 11:05:00.759481   55115 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 11:05:00.765051   55115 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 11:05:00.766000   55115 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 11:05:00.766664   55115 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 11:05:00.768619   55115 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 11:05:07.770960   55115 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.002231 seconds
	I0731 11:05:07.771166   55115 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 11:05:07.781723   55115 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 11:05:08.297101   55115 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 11:05:08.297351   55115 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-033299 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0731 11:05:08.805116   55115 kubeadm.go:322] [bootstrap-token] Using token: ofbq1s.m6l9tj22vzhu6rkf
	I0731 11:05:08.806765   55115 out.go:204]   - Configuring RBAC rules ...
	I0731 11:05:08.806927   55115 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 11:05:08.809808   55115 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 11:05:08.815740   55115 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 11:05:08.817727   55115 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 11:05:08.819533   55115 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 11:05:08.822202   55115 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 11:05:08.829279   55115 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 11:05:09.052012   55115 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0731 11:05:09.219367   55115 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0731 11:05:09.220562   55115 kubeadm.go:322] 
	I0731 11:05:09.220638   55115 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0731 11:05:09.220646   55115 kubeadm.go:322] 
	I0731 11:05:09.220712   55115 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0731 11:05:09.220719   55115 kubeadm.go:322] 
	I0731 11:05:09.220738   55115 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0731 11:05:09.220784   55115 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 11:05:09.220825   55115 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 11:05:09.220831   55115 kubeadm.go:322] 
	I0731 11:05:09.220873   55115 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0731 11:05:09.220937   55115 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 11:05:09.221043   55115 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 11:05:09.221075   55115 kubeadm.go:322] 
	I0731 11:05:09.221191   55115 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 11:05:09.221333   55115 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0731 11:05:09.221368   55115 kubeadm.go:322] 
	I0731 11:05:09.221483   55115 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ofbq1s.m6l9tj22vzhu6rkf \
	I0731 11:05:09.221597   55115 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:293b68dd99d5c75256004a8ddc8637ea08a1940f52c1b0e6476e24cc10aea3dd \
	I0731 11:05:09.221639   55115 kubeadm.go:322]     --control-plane 
	I0731 11:05:09.221649   55115 kubeadm.go:322] 
	I0731 11:05:09.221764   55115 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0731 11:05:09.221772   55115 kubeadm.go:322] 
	I0731 11:05:09.221890   55115 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ofbq1s.m6l9tj22vzhu6rkf \
	I0731 11:05:09.222061   55115 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:293b68dd99d5c75256004a8ddc8637ea08a1940f52c1b0e6476e24cc10aea3dd 
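	The --discovery-token-ca-cert-hash printed above is how joining nodes pin the cluster CA: it is the SHA-256 of the CA certificate's Subject Public Key Info. A stdlib sketch that recomputes it from a CA PEM (the path here is illustrative):

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block in CA file")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm hashes the DER-encoded Subject Public Key Info of the CA.
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
	}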
	I0731 11:05:09.223941   55115 kubeadm.go:322] W0731 11:04:57.777755    1370 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0731 11:05:09.224138   55115 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1038-gcp\n", err: exit status 1
	I0731 11:05:09.224282   55115 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 11:05:09.224395   55115 kubeadm.go:322] W0731 11:05:00.764770    1370 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0731 11:05:09.224501   55115 kubeadm.go:322] W0731 11:05:00.765750    1370 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0731 11:05:09.224520   55115 cni.go:84] Creating CNI manager for ""
	I0731 11:05:09.224528   55115 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0731 11:05:09.226368   55115 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0731 11:05:09.227857   55115 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0731 11:05:09.231531   55115 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0731 11:05:09.231547   55115 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0731 11:05:09.247206   55115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0731 11:05:09.667145   55115 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 11:05:09.667185   55115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:05:09.667208   55115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.1 minikube.k8s.io/commit=0b7b0f4114385a1c2b88c73e894c2289f44aee35 minikube.k8s.io/name=ingress-addon-legacy-033299 minikube.k8s.io/updated_at=2023_07_31T11_05_09_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:05:09.674560   55115 ops.go:34] apiserver oom_adj: -16
	I0731 11:05:09.759771   55115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:05:09.843052   55115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:05:10.407482   55115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:05:10.906977   55115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:05:11.407187   55115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:05:11.906823   55115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:05:12.406838   55115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:05:12.907397   55115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:05:13.407355   55115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:05:13.907609   55115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:05:14.407671   55115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:05:14.907541   55115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:05:15.407165   55115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:05:15.907889   55115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:05:16.406964   55115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:05:16.907382   55115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:05:17.406914   55115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:05:17.906902   55115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:05:18.407378   55115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:05:18.906987   55115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:05:19.407200   55115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:05:19.907771   55115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:05:20.407619   55115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:05:20.907872   55115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:05:21.407870   55115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:05:21.906938   55115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:05:22.407033   55115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:05:22.906877   55115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:05:23.407665   55115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:05:23.907135   55115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:05:24.407197   55115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:05:24.474072   55115 kubeadm.go:1081] duration metric: took 14.806930054s to wait for elevateKubeSystemPrivileges.
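	The burst of identical `kubectl get sa default` runs above is a poll: kubeadm returns before the token controller has created the default service account, so minikube retries roughly every 500ms until the command succeeds (14.8s here). A sketch of that wait loop:

	package main

	import (
		"context"
		"os/exec"
		"time"
	)

	func waitForDefaultSA(ctx context.Context) error {
		tick := time.NewTicker(500 * time.Millisecond)
		defer tick.Stop()
		for {
			cmd := exec.CommandContext(ctx, "kubectl", "get", "sa", "default",
				"--kubeconfig=/var/lib/minikube/kubeconfig")
			if cmd.Run() == nil {
				return nil // the default service account now exists
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-tick.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()
		if err := waitForDefaultSA(ctx); err != nil {
			panic(err)
		}
	}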
	I0731 11:05:24.474115   55115 kubeadm.go:406] StartCluster complete in 26.79328068s
	I0731 11:05:24.474195   55115 settings.go:142] acquiring lock: {Name:mk56cd859b72e4589e0c5d99bc981c97b4dc2ff3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:05:24.474298   55115 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16968-8855/kubeconfig
	I0731 11:05:24.475257   55115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-8855/kubeconfig: {Name:mk53977df3b191de084093522567bbafd77b3df1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:05:24.475849   55115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0731 11:05:24.475931   55115 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0731 11:05:24.476040   55115 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-033299"
	I0731 11:05:24.476058   55115 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-033299"
	I0731 11:05:24.476063   55115 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-033299"
	I0731 11:05:24.476088   55115 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-033299"
	I0731 11:05:24.476111   55115 host.go:66] Checking if "ingress-addon-legacy-033299" exists ...
	I0731 11:05:24.476150   55115 config.go:182] Loaded profile config "ingress-addon-legacy-033299": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0731 11:05:24.476492   55115 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-033299 --format={{.State.Status}}
	I0731 11:05:24.476562   55115 kapi.go:59] client config for ingress-addon-legacy-033299: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/client.crt", KeyFile:"/home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/client.key", CAFile:"/home/jenkins/minikube-integration/16968-8855/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2840), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 11:05:24.476700   55115 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-033299 --format={{.State.Status}}
	I0731 11:05:24.477468   55115 cert_rotation.go:137] Starting client certificate rotation controller
	I0731 11:05:24.493411   55115 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-033299" context rescaled to 1 replicas
	I0731 11:05:24.493455   55115 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 11:05:24.495259   55115 out.go:177] * Verifying Kubernetes components...
	I0731 11:05:24.496575   55115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 11:05:24.500922   55115 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 11:05:24.501633   55115 kapi.go:59] client config for ingress-addon-legacy-033299: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/client.crt", KeyFile:"/home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/client.key", CAFile:"/home/jenkins/minikube-integration/16968-8855/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2840), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 11:05:24.502805   55115 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 11:05:24.502821   55115 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 11:05:24.502865   55115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-033299
	I0731 11:05:24.506880   55115 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-033299"
	I0731 11:05:24.506918   55115 host.go:66] Checking if "ingress-addon-legacy-033299" exists ...
	I0731 11:05:24.507324   55115 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-033299 --format={{.State.Status}}
	I0731 11:05:24.524815   55115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/ingress-addon-legacy-033299/id_rsa Username:docker}
	I0731 11:05:24.526253   55115 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 11:05:24.526275   55115 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 11:05:24.526324   55115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-033299
	I0731 11:05:24.547474   55115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/ingress-addon-legacy-033299/id_rsa Username:docker}
	I0731 11:05:24.584678   55115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0731 11:05:24.585248   55115 kapi.go:59] client config for ingress-addon-legacy-033299: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/client.crt", KeyFile:"/home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/client.key", CAFile:"/home/jenkins/minikube-integration/16968-8855/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2840), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 11:05:24.585514   55115 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-033299" to be "Ready" ...
	I0731 11:05:24.650149   55115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 11:05:24.653762   55115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 11:05:25.042784   55115 start.go:901] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
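	The sed pipeline a few lines up rewrites the coredns ConfigMap in place: it splices a hosts block (mapping host.minikube.internal to the gateway IP, with fallthrough so other names still reach the forwarder) just before the `forward . /etc/resolv.conf` line, and a `log` directive before `errors`. The string surgery, sketched in Go on a minimal Corefile:

	package main

	import (
		"fmt"
		"strings"
	)

	const hostsBlock = `        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }
	`

	func injectHosts(corefile string) string {
		var out strings.Builder
		for _, line := range strings.SplitAfter(corefile, "\n") {
			// Insert the hosts plugin immediately before the forwarder so
			// host.minikube.internal resolves locally and every other name
			// falls through to /etc/resolv.conf.
			if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
				out.WriteString(hostsBlock)
			}
			out.WriteString(line)
		}
		return out.String()
	}

	func main() {
		corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}\n"
		fmt.Print(injectHosts(corefile))
	}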
	I0731 11:05:25.158770   55115 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0731 11:05:25.160127   55115 addons.go:502] enable addons completed in 684.206713ms: enabled=[storage-provisioner default-storageclass]
	I0731 11:05:26.593967   55115 node_ready.go:58] node "ingress-addon-legacy-033299" has status "Ready":"False"
	I0731 11:05:29.094326   55115 node_ready.go:58] node "ingress-addon-legacy-033299" has status "Ready":"False"
	I0731 11:05:29.883185   55115 node_ready.go:49] node "ingress-addon-legacy-033299" has status "Ready":"True"
	I0731 11:05:29.883222   55115 node_ready.go:38] duration metric: took 5.297682456s waiting for node "ingress-addon-legacy-033299" to be "Ready" ...
	I0731 11:05:29.883235   55115 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 11:05:29.891029   55115 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-zv98f" in "kube-system" namespace to be "Ready" ...
	I0731 11:05:32.142179   55115 pod_ready.go:102] pod "coredns-66bff467f8-zv98f" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-07-31 11:05:24 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0731 11:05:34.147236   55115 pod_ready.go:102] pod "coredns-66bff467f8-zv98f" in "kube-system" namespace has status "Ready":"False"
	I0731 11:05:36.645184   55115 pod_ready.go:102] pod "coredns-66bff467f8-zv98f" in "kube-system" namespace has status "Ready":"False"
	I0731 11:05:39.144324   55115 pod_ready.go:102] pod "coredns-66bff467f8-zv98f" in "kube-system" namespace has status "Ready":"False"
	I0731 11:05:41.145102   55115 pod_ready.go:92] pod "coredns-66bff467f8-zv98f" in "kube-system" namespace has status "Ready":"True"
	I0731 11:05:41.145131   55115 pod_ready.go:81] duration metric: took 11.254069701s waiting for pod "coredns-66bff467f8-zv98f" in "kube-system" namespace to be "Ready" ...
	I0731 11:05:41.145146   55115 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-033299" in "kube-system" namespace to be "Ready" ...
	I0731 11:05:41.150993   55115 pod_ready.go:92] pod "etcd-ingress-addon-legacy-033299" in "kube-system" namespace has status "Ready":"True"
	I0731 11:05:41.151019   55115 pod_ready.go:81] duration metric: took 5.866344ms waiting for pod "etcd-ingress-addon-legacy-033299" in "kube-system" namespace to be "Ready" ...
	I0731 11:05:41.151036   55115 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-033299" in "kube-system" namespace to be "Ready" ...
	I0731 11:05:41.156492   55115 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-033299" in "kube-system" namespace has status "Ready":"True"
	I0731 11:05:41.156514   55115 pod_ready.go:81] duration metric: took 5.470186ms waiting for pod "kube-apiserver-ingress-addon-legacy-033299" in "kube-system" namespace to be "Ready" ...
	I0731 11:05:41.156525   55115 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-033299" in "kube-system" namespace to be "Ready" ...
	I0731 11:05:41.161928   55115 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-033299" in "kube-system" namespace has status "Ready":"True"
	I0731 11:05:41.161957   55115 pod_ready.go:81] duration metric: took 5.42465ms waiting for pod "kube-controller-manager-ingress-addon-legacy-033299" in "kube-system" namespace to be "Ready" ...
	I0731 11:05:41.161969   55115 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5v8wj" in "kube-system" namespace to be "Ready" ...
	I0731 11:05:41.166671   55115 pod_ready.go:92] pod "kube-proxy-5v8wj" in "kube-system" namespace has status "Ready":"True"
	I0731 11:05:41.166692   55115 pod_ready.go:81] duration metric: took 4.717079ms waiting for pod "kube-proxy-5v8wj" in "kube-system" namespace to be "Ready" ...
	I0731 11:05:41.166704   55115 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-033299" in "kube-system" namespace to be "Ready" ...
	I0731 11:05:41.341054   55115 request.go:628] Waited for 174.291766ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-033299
	I0731 11:05:41.540963   55115 request.go:628] Waited for 197.379759ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-033299
	I0731 11:05:41.543593   55115 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-033299" in "kube-system" namespace has status "Ready":"True"
	I0731 11:05:41.543615   55115 pod_ready.go:81] duration metric: took 376.90312ms waiting for pod "kube-scheduler-ingress-addon-legacy-033299" in "kube-system" namespace to be "Ready" ...
	I0731 11:05:41.543629   55115 pod_ready.go:38] duration metric: took 11.660367288s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
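	The "Waited for ... due to client-side throttling" lines above come from client-go's own request rate limiter, not the server's priority-and-fairness: with QPS and Burst left at zero in the rest.Config dumps earlier, the client defaults apply (5 requests/s, burst of 10), so the back-to-back readiness checks queue briefly. A stand-in for that limiter using golang.org/x/time/rate (an approximation, not client-go's flowcontrol package):

	package main

	import (
		"context"
		"fmt"
		"time"

		"golang.org/x/time/rate"
	)

	func main() {
		// client-go's default client-side limits: 5 requests/s, burst 10.
		limiter := rate.NewLimiter(rate.Limit(5), 10)
		ctx := context.Background()
		for i := 0; i < 15; i++ {
			start := time.Now()
			if err := limiter.Wait(ctx); err != nil {
				panic(err)
			}
			if waited := time.Since(start); waited > time.Millisecond {
				// Mirrors the request.go:628 "Waited for ..." log lines.
				fmt.Printf("request %d throttled for %v\n", i, waited)
			}
		}
	}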
	I0731 11:05:41.543688   55115 api_server.go:52] waiting for apiserver process to appear ...
	I0731 11:05:41.543747   55115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 11:05:41.553948   55115 api_server.go:72] duration metric: took 17.06046277s to wait for apiserver process to appear ...
	I0731 11:05:41.553968   55115 api_server.go:88] waiting for apiserver healthz status ...
	I0731 11:05:41.553980   55115 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0731 11:05:41.559493   55115 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0731 11:05:41.560334   55115 api_server.go:141] control plane version: v1.18.20
	I0731 11:05:41.560356   55115 api_server.go:131] duration metric: took 6.382493ms to wait for apiserver health ...
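	The healthz gate above is a plain HTTPS GET against the apiserver that must return 200 with body "ok". A sketch of the probe; certificate verification is skipped here purely for brevity, whereas minikube trusts its generated CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		client := &http.Client{
			// Illustrative shortcut only; real code should load the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
	}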
	I0731 11:05:41.560363   55115 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 11:05:41.740790   55115 request.go:628] Waited for 180.368995ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0731 11:05:41.745922   55115 system_pods.go:59] 8 kube-system pods found
	I0731 11:05:41.745957   55115 system_pods.go:61] "coredns-66bff467f8-zv98f" [b8db6602-1e03-4287-b844-d96808c518d2] Running
	I0731 11:05:41.745963   55115 system_pods.go:61] "etcd-ingress-addon-legacy-033299" [6197a611-3d0e-4a4b-a9a5-411ec027542b] Running
	I0731 11:05:41.745970   55115 system_pods.go:61] "kindnet-55rs5" [38bc5cf0-51a4-4b98-81b4-ed0963ba4d7d] Running
	I0731 11:05:41.745975   55115 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-033299" [95ac6ade-dfd0-4bee-aa1f-ea97ab7d5e73] Running
	I0731 11:05:41.745979   55115 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-033299" [483b1059-b82b-4569-b0c4-1738b85f0655] Running
	I0731 11:05:41.745983   55115 system_pods.go:61] "kube-proxy-5v8wj" [339370bf-8c8b-4523-9719-f10232ffd5e7] Running
	I0731 11:05:41.745987   55115 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-033299" [32d0b987-59c1-420f-856b-98d7470789a5] Running
	I0731 11:05:41.745994   55115 system_pods.go:61] "storage-provisioner" [7992d366-98b6-4458-a5a5-35e1f7f60f9e] Running
	I0731 11:05:41.746000   55115 system_pods.go:74] duration metric: took 185.632741ms to wait for pod list to return data ...
	I0731 11:05:41.746011   55115 default_sa.go:34] waiting for default service account to be created ...
	I0731 11:05:41.940366   55115 request.go:628] Waited for 194.291746ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0731 11:05:41.942770   55115 default_sa.go:45] found service account: "default"
	I0731 11:05:41.942794   55115 default_sa.go:55] duration metric: took 196.77818ms for default service account to be created ...
	I0731 11:05:41.942803   55115 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 11:05:42.141087   55115 request.go:628] Waited for 198.19817ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0731 11:05:42.146410   55115 system_pods.go:86] 8 kube-system pods found
	I0731 11:05:42.146438   55115 system_pods.go:89] "coredns-66bff467f8-zv98f" [b8db6602-1e03-4287-b844-d96808c518d2] Running
	I0731 11:05:42.146443   55115 system_pods.go:89] "etcd-ingress-addon-legacy-033299" [6197a611-3d0e-4a4b-a9a5-411ec027542b] Running
	I0731 11:05:42.146447   55115 system_pods.go:89] "kindnet-55rs5" [38bc5cf0-51a4-4b98-81b4-ed0963ba4d7d] Running
	I0731 11:05:42.146451   55115 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-033299" [95ac6ade-dfd0-4bee-aa1f-ea97ab7d5e73] Running
	I0731 11:05:42.146456   55115 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-033299" [483b1059-b82b-4569-b0c4-1738b85f0655] Running
	I0731 11:05:42.146459   55115 system_pods.go:89] "kube-proxy-5v8wj" [339370bf-8c8b-4523-9719-f10232ffd5e7] Running
	I0731 11:05:42.146463   55115 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-033299" [32d0b987-59c1-420f-856b-98d7470789a5] Running
	I0731 11:05:42.146467   55115 system_pods.go:89] "storage-provisioner" [7992d366-98b6-4458-a5a5-35e1f7f60f9e] Running
	I0731 11:05:42.146472   55115 system_pods.go:126] duration metric: took 203.665216ms to wait for k8s-apps to be running ...
	I0731 11:05:42.146478   55115 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 11:05:42.146522   55115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 11:05:42.157477   55115 system_svc.go:56] duration metric: took 10.989134ms WaitForService to wait for kubelet.
	I0731 11:05:42.157503   55115 kubeadm.go:581] duration metric: took 17.664020718s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0731 11:05:42.157519   55115 node_conditions.go:102] verifying NodePressure condition ...
	I0731 11:05:42.340906   55115 request.go:628] Waited for 183.327044ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0731 11:05:42.343567   55115 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0731 11:05:42.343599   55115 node_conditions.go:123] node cpu capacity is 8
	I0731 11:05:42.343610   55115 node_conditions.go:105] duration metric: took 186.086787ms to run NodePressure ...
	I0731 11:05:42.343622   55115 start.go:228] waiting for startup goroutines ...
	I0731 11:05:42.343629   55115 start.go:233] waiting for cluster config update ...
	I0731 11:05:42.343640   55115 start.go:242] writing updated cluster config ...
	I0731 11:05:42.344070   55115 ssh_runner.go:195] Run: rm -f paused
	I0731 11:05:42.387783   55115 start.go:596] kubectl: 1.27.4, cluster: 1.18.20 (minor skew: 9)
	I0731 11:05:42.390107   55115 out.go:177] 
	W0731 11:05:42.391713   55115 out.go:239] ! /usr/local/bin/kubectl is version 1.27.4, which may have incompatibilities with Kubernetes 1.18.20.
	I0731 11:05:42.393349   55115 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0731 11:05:42.394913   55115 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-033299" cluster and "default" namespace by default
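	The closing warning is a simple minor-version comparison: kubectl 1.27 against cluster 1.18 is a skew of 9 minors, far beyond the one-minor window kubectl officially supports, hence the hint to use the bundled `minikube kubectl`. The arithmetic, sketched:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minor extracts the minor component of a "major.minor.patch" version.
	func minor(v string) int {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		m, err := strconv.Atoi(parts[1])
		if err != nil {
			panic(err)
		}
		return m
	}

	func main() {
		skew := minor("1.27.4") - minor("1.18.20")
		if skew < 0 {
			skew = -skew
		}
		fmt.Printf("minor skew: %d\n", skew) // 9, matching the log
	}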
	
	* 
	* ==> CRI-O <==
	* Jul 31 11:08:43 ingress-addon-legacy-033299 crio[955]: time="2023-07-31 11:08:43.433636382Z" level=info msg="Stopping pod sandbox: 2d7cb8d3c86fb2ac835f70a38be06f5f8840cd2b49bc2ba6874dce69b9307211" id=540eb1cb-ff04-4257-8059-99e44cede623 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 31 11:08:43 ingress-addon-legacy-033299 crio[955]: time="2023-07-31 11:08:43.434568171Z" level=info msg="Stopped pod sandbox: 2d7cb8d3c86fb2ac835f70a38be06f5f8840cd2b49bc2ba6874dce69b9307211" id=540eb1cb-ff04-4257-8059-99e44cede623 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 31 11:08:43 ingress-addon-legacy-033299 crio[955]: time="2023-07-31 11:08:43.794579553Z" level=info msg="Stopping pod sandbox: 2d7cb8d3c86fb2ac835f70a38be06f5f8840cd2b49bc2ba6874dce69b9307211" id=b24f418a-283e-46bf-bb3e-c82b5ba2e1c5 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 31 11:08:43 ingress-addon-legacy-033299 crio[955]: time="2023-07-31 11:08:43.794636850Z" level=info msg="Stopped pod sandbox (already stopped): 2d7cb8d3c86fb2ac835f70a38be06f5f8840cd2b49bc2ba6874dce69b9307211" id=b24f418a-283e-46bf-bb3e-c82b5ba2e1c5 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 31 11:08:44 ingress-addon-legacy-033299 crio[955]: time="2023-07-31 11:08:44.540942832Z" level=info msg="Stopping container: 5639c897bdab2f908b9b663f8142008fa5f34b00439c88eb2caf47aaae205876 (timeout: 2s)" id=f8489999-7c81-47e6-aea1-4c9ab8a0bd1c name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jul 31 11:08:44 ingress-addon-legacy-033299 crio[955]: time="2023-07-31 11:08:44.543156368Z" level=info msg="Stopping container: 5639c897bdab2f908b9b663f8142008fa5f34b00439c88eb2caf47aaae205876 (timeout: 2s)" id=c20a5193-bb44-411f-98b0-3ffa6e862b34 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jul 31 11:08:46 ingress-addon-legacy-033299 crio[955]: time="2023-07-31 11:08:46.550403629Z" level=warning msg="Stopping container 5639c897bdab2f908b9b663f8142008fa5f34b00439c88eb2caf47aaae205876 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=f8489999-7c81-47e6-aea1-4c9ab8a0bd1c name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jul 31 11:08:46 ingress-addon-legacy-033299 conmon[3413]: conmon 5639c897bdab2f908b9b <ninfo>: container 3425 exited with status 137
	Jul 31 11:08:46 ingress-addon-legacy-033299 crio[955]: time="2023-07-31 11:08:46.709756910Z" level=info msg="Stopped container 5639c897bdab2f908b9b663f8142008fa5f34b00439c88eb2caf47aaae205876: ingress-nginx/ingress-nginx-controller-7fcf777cb7-s2hcr/controller" id=f8489999-7c81-47e6-aea1-4c9ab8a0bd1c name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jul 31 11:08:46 ingress-addon-legacy-033299 crio[955]: time="2023-07-31 11:08:46.709776082Z" level=info msg="Stopped container 5639c897bdab2f908b9b663f8142008fa5f34b00439c88eb2caf47aaae205876: ingress-nginx/ingress-nginx-controller-7fcf777cb7-s2hcr/controller" id=c20a5193-bb44-411f-98b0-3ffa6e862b34 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jul 31 11:08:46 ingress-addon-legacy-033299 crio[955]: time="2023-07-31 11:08:46.710422161Z" level=info msg="Stopping pod sandbox: ff60d629827a2721763ec78644e59dbc37cdcf0282932d8c3181291714976f4c" id=54c97e7c-a652-499f-987c-da40acb8e9f1 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 31 11:08:46 ingress-addon-legacy-033299 crio[955]: time="2023-07-31 11:08:46.710444845Z" level=info msg="Stopping pod sandbox: ff60d629827a2721763ec78644e59dbc37cdcf0282932d8c3181291714976f4c" id=3b9e2fcf-3041-4f04-b5da-1cc94c594b81 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 31 11:08:46 ingress-addon-legacy-033299 crio[955]: time="2023-07-31 11:08:46.713253687Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-3QHMFGLMN4BDDK62 - [0:0]\n:KUBE-HP-SZWUYCFIS4MIHTOZ - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-SZWUYCFIS4MIHTOZ\n-X KUBE-HP-3QHMFGLMN4BDDK62\nCOMMIT\n"
	Jul 31 11:08:46 ingress-addon-legacy-033299 crio[955]: time="2023-07-31 11:08:46.714530483Z" level=info msg="Closing host port tcp:80"
	Jul 31 11:08:46 ingress-addon-legacy-033299 crio[955]: time="2023-07-31 11:08:46.714564488Z" level=info msg="Closing host port tcp:443"
	Jul 31 11:08:46 ingress-addon-legacy-033299 crio[955]: time="2023-07-31 11:08:46.715510923Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jul 31 11:08:46 ingress-addon-legacy-033299 crio[955]: time="2023-07-31 11:08:46.715525997Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jul 31 11:08:46 ingress-addon-legacy-033299 crio[955]: time="2023-07-31 11:08:46.715633776Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-s2hcr Namespace:ingress-nginx ID:ff60d629827a2721763ec78644e59dbc37cdcf0282932d8c3181291714976f4c UID:0b57f19e-d070-4e58-abb3-65ccd6e5c655 NetNS:/var/run/netns/55a69f57-e3bb-44e9-be8b-66aa6e534ed6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jul 31 11:08:46 ingress-addon-legacy-033299 crio[955]: time="2023-07-31 11:08:46.715741953Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-s2hcr from CNI network \"kindnet\" (type=ptp)"
	Jul 31 11:08:46 ingress-addon-legacy-033299 crio[955]: time="2023-07-31 11:08:46.753437335Z" level=info msg="Stopped pod sandbox: ff60d629827a2721763ec78644e59dbc37cdcf0282932d8c3181291714976f4c" id=54c97e7c-a652-499f-987c-da40acb8e9f1 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 31 11:08:46 ingress-addon-legacy-033299 crio[955]: time="2023-07-31 11:08:46.753575389Z" level=info msg="Stopped pod sandbox (already stopped): ff60d629827a2721763ec78644e59dbc37cdcf0282932d8c3181291714976f4c" id=3b9e2fcf-3041-4f04-b5da-1cc94c594b81 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 31 11:08:47 ingress-addon-legacy-033299 crio[955]: time="2023-07-31 11:08:47.433045461Z" level=info msg="Stopping container: 5639c897bdab2f908b9b663f8142008fa5f34b00439c88eb2caf47aaae205876 (timeout: 2s)" id=4153632b-e671-4884-af12-fee229b023e4 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jul 31 11:08:47 ingress-addon-legacy-033299 crio[955]: time="2023-07-31 11:08:47.435798220Z" level=info msg="Stopped container 5639c897bdab2f908b9b663f8142008fa5f34b00439c88eb2caf47aaae205876: ingress-nginx/ingress-nginx-controller-7fcf777cb7-s2hcr/controller" id=4153632b-e671-4884-af12-fee229b023e4 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jul 31 11:08:47 ingress-addon-legacy-033299 crio[955]: time="2023-07-31 11:08:47.436167641Z" level=info msg="Stopping pod sandbox: ff60d629827a2721763ec78644e59dbc37cdcf0282932d8c3181291714976f4c" id=711649bd-67a9-4749-b3b5-4bc10934841f name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 31 11:08:47 ingress-addon-legacy-033299 crio[955]: time="2023-07-31 11:08:47.436202703Z" level=info msg="Stopped pod sandbox (already stopped): ff60d629827a2721763ec78644e59dbc37cdcf0282932d8c3181291714976f4c" id=711649bd-67a9-4749-b3b5-4bc10934841f name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	566857aed4b4a       gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea            24 seconds ago      Running             hello-world-app           0                   e7ce2b88a0b1f       hello-world-app-5f5d8b66bb-srcxf
	19ae6eb7e38b4       docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6                    2 minutes ago       Running             nginx                     0                   ed152103a0b67       nginx
	5639c897bdab2       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   ff60d629827a2       ingress-nginx-controller-7fcf777cb7-s2hcr
	cd8d59edf19d8       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              patch                     0                   676bc7110c457       ingress-nginx-admission-patch-2wspk
	7733a36c76556       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   51251ea5233e4       ingress-nginx-admission-create-plrjf
	6d7b1db89256e       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   69f55e114e3ca       coredns-66bff467f8-zv98f
	546abdc9faea0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   c0f5ef1092178       storage-provisioner
	b26f08ab3705f       docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974                 3 minutes ago       Running             kindnet-cni               0                   36967ffdfcd8e       kindnet-55rs5
	92d90e862c9eb       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   15dc1be33c9d0       kube-proxy-5v8wj
	ca96f2787d9c3       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   3 minutes ago       Running             etcd                      0                   d1f9cf3e721e6       etcd-ingress-addon-legacy-033299
	5f6d3cc8e1e52       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   3 minutes ago       Running             kube-apiserver            0                   931875fb38958       kube-apiserver-ingress-addon-legacy-033299
	8a99b46925d23       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   3 minutes ago       Running             kube-controller-manager   0                   39f5aa35efb32       kube-controller-manager-ingress-addon-legacy-033299
	d96e94430981d       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   3 minutes ago       Running             kube-scheduler            0                   90ab037ab848f       kube-scheduler-ingress-addon-legacy-033299
	
	* 
	* ==> coredns [6d7b1db89256e76d3f13c0622292fb7cfaad21adb4b7beaa4fa3db7eb9005a85] <==
	* [INFO] 10.244.0.5:53788 - 37818 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.006666672s
	[INFO] 10.244.0.5:53125 - 52413 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005185307s
	[INFO] 10.244.0.5:57705 - 46382 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005513459s
	[INFO] 10.244.0.5:54312 - 48810 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005319689s
	[INFO] 10.244.0.5:55961 - 17655 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005501751s
	[INFO] 10.244.0.5:53788 - 46866 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005286214s
	[INFO] 10.244.0.5:51050 - 35142 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005345065s
	[INFO] 10.244.0.5:43535 - 39767 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005269864s
	[INFO] 10.244.0.5:57372 - 56703 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005652108s
	[INFO] 10.244.0.5:55961 - 8568 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006569445s
	[INFO] 10.244.0.5:57705 - 30376 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006762628s
	[INFO] 10.244.0.5:43535 - 47006 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006618648s
	[INFO] 10.244.0.5:53125 - 42259 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006953469s
	[INFO] 10.244.0.5:54312 - 62325 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006768484s
	[INFO] 10.244.0.5:51050 - 2497 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006775781s
	[INFO] 10.244.0.5:57372 - 17322 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006727773s
	[INFO] 10.244.0.5:57705 - 8954 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000067066s
	[INFO] 10.244.0.5:55961 - 2846 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000214926s
	[INFO] 10.244.0.5:53788 - 34008 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006919134s
	[INFO] 10.244.0.5:51050 - 41751 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000045084s
	[INFO] 10.244.0.5:57372 - 28998 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000068667s
	[INFO] 10.244.0.5:53788 - 48199 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000063262s
	[INFO] 10.244.0.5:53125 - 49884 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000293981s
	[INFO] 10.244.0.5:54312 - 44043 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000414218s
	[INFO] 10.244.0.5:43535 - 39245 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000502431s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-033299
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-033299
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0b7b0f4114385a1c2b88c73e894c2289f44aee35
	                    minikube.k8s.io/name=ingress-addon-legacy-033299
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_31T11_05_09_0700
	                    minikube.k8s.io/version=v1.31.1
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 31 Jul 2023 11:05:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-033299
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 31 Jul 2023 11:08:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 31 Jul 2023 11:08:39 +0000   Mon, 31 Jul 2023 11:05:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 31 Jul 2023 11:08:39 +0000   Mon, 31 Jul 2023 11:05:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 31 Jul 2023 11:08:39 +0000   Mon, 31 Jul 2023 11:05:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 31 Jul 2023 11:08:39 +0000   Mon, 31 Jul 2023 11:05:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-033299
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	System Info:
	  Machine ID:                 9c032191719a42f48f6bcdcc89adcb43
	  System UUID:                c6242b71-369f-4cb6-a659-32b1afc37b5f
	  Boot ID:                    c4e7adf1-530e-4fca-8214-6daedbc0c53f
	  Kernel Version:             5.15.0-1038-gcp
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-srcxf                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  kube-system                 coredns-66bff467f8-zv98f                               100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m28s
	  kube-system                 etcd-ingress-addon-legacy-033299                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 kindnet-55rs5                                          100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m28s
	  kube-system                 kube-apiserver-ingress-addon-legacy-033299             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-033299    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 kube-proxy-5v8wj                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 kube-scheduler-ingress-addon-legacy-033299             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             120Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From        Message
	  ----    ------                   ----   ----        -------
	  Normal  Starting                 3m43s  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m43s  kubelet     Node ingress-addon-legacy-033299 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m43s  kubelet     Node ingress-addon-legacy-033299 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m43s  kubelet     Node ingress-addon-legacy-033299 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m27s  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m23s  kubelet     Node ingress-addon-legacy-033299 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.004927] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006598] FS-Cache: N-cookie d=00000000b387d585{9p.inode} n=00000000ef355d8f
	[  +0.007366] FS-Cache: N-key=[8] '92a00f0200000000'
	[ +13.163366] FS-Cache: Duplicate cookie detected
	[  +0.004759] FS-Cache: O-cookie c=00000011 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006792] FS-Cache: O-cookie d=0000000033120b35{9P.session} n=0000000051ffe9f3
	[  +0.007565] FS-Cache: O-key=[10] '34323935353934333438'
	[  +0.005373] FS-Cache: N-cookie c=00000012 [p=00000002 fl=2 nc=0 na=1]
	[  +0.007967] FS-Cache: N-cookie d=0000000033120b35{9P.session} n=000000003510cfd5
	[  +0.008908] FS-Cache: N-key=[10] '34323935353934333438'
	[  +9.008019] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Jul31 11:06] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 22 7e c3 30 75 2c 9e a9 47 19 7e 8d 08 00
	[  +1.023871] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 22 7e c3 30 75 2c 9e a9 47 19 7e 8d 08 00
	[  +2.015801] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 22 7e c3 30 75 2c 9e a9 47 19 7e 8d 08 00
	[  +4.255584] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 22 7e c3 30 75 2c 9e a9 47 19 7e 8d 08 00
	[  +8.191207] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 22 7e c3 30 75 2c 9e a9 47 19 7e 8d 08 00
	[ +16.126424] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 22 7e c3 30 75 2c 9e a9 47 19 7e 8d 08 00
	[Jul31 11:07] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 22 7e c3 30 75 2c 9e a9 47 19 7e 8d 08 00
	
	* 
	* ==> etcd [ca96f2787d9c34545ecc793be4d9a0ecd162c5c28cfab35e58ed8e68d1fa65ec] <==
	* 2023-07-31 11:05:02.059251 W | auth: simple token is not cryptographically signed
	2023-07-31 11:05:02.062448 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-07-31 11:05:02.062906 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/07/31 11:05:02 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-07-31 11:05:02.063349 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-07-31 11:05:02.066018 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-07-31 11:05:02.066160 I | embed: listening for peers on 192.168.49.2:2380
	2023-07-31 11:05:02.066187 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/07/31 11:05:02 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/07/31 11:05:02 INFO: aec36adc501070cc became candidate at term 2
	raft2023/07/31 11:05:02 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/07/31 11:05:02 INFO: aec36adc501070cc became leader at term 2
	raft2023/07/31 11:05:02 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-07-31 11:05:02.655030 I | etcdserver: setting up the initial cluster version to 3.4
	2023-07-31 11:05:02.656003 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-07-31 11:05:02.656066 I | etcdserver/api: enabled capabilities for version 3.4
	2023-07-31 11:05:02.656094 I | etcdserver: published {Name:ingress-addon-legacy-033299 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-07-31 11:05:02.656100 I | embed: ready to serve client requests
	2023-07-31 11:05:02.656114 I | embed: ready to serve client requests
	2023-07-31 11:05:02.658328 I | embed: serving client requests on 127.0.0.1:2379
	2023-07-31 11:05:02.658583 I | embed: serving client requests on 192.168.49.2:2379
	2023-07-31 11:05:29.881210 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-66bff467f8-zv98f\" " with result "range_response_count:1 size:3753" took too long (342.14464ms) to execute
	2023-07-31 11:05:29.881506 W | etcdserver: request "header:<ID:8128022797704718801 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/minions/ingress-addon-legacy-033299\" mod_revision:398 > success:<request_put:<key:\"/registry/minions/ingress-addon-legacy-033299\" value_size:6323 >> failure:<request_range:<key:\"/registry/minions/ingress-addon-legacy-033299\" > >>" with result "size:16" took too long (229.531212ms) to execute
	2023-07-31 11:05:29.881638 W | etcdserver: read-only range request "key:\"/registry/minions/ingress-addon-legacy-033299\" " with result "range_response_count:1 size:6390" took too long (288.928883ms) to execute
	2023-07-31 11:05:30.075598 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-66bff467f8-zv98f\" " with result "range_response_count:1 size:3753" took too long (183.783313ms) to execute
	
	* 
	* ==> kernel <==
	*  11:08:52 up 51 min,  0 users,  load average: 0.06, 0.48, 0.38
	Linux ingress-addon-legacy-033299 5.15.0-1038-gcp #46~20.04.1-Ubuntu SMP Fri Jul 14 09:48:19 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [b26f08ab3705fe070152fb9e97cc59a84e2290ac190dff41fb105cb72e4d55bf] <==
	* I0731 11:06:47.497918       1 main.go:227] handling current node
	I0731 11:06:57.502663       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 11:06:57.502689       1 main.go:227] handling current node
	I0731 11:07:07.514879       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 11:07:07.514903       1 main.go:227] handling current node
	I0731 11:07:17.518533       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 11:07:17.518560       1 main.go:227] handling current node
	I0731 11:07:27.529027       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 11:07:27.529054       1 main.go:227] handling current node
	I0731 11:07:37.531567       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 11:07:37.531592       1 main.go:227] handling current node
	I0731 11:07:47.543520       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 11:07:47.543544       1 main.go:227] handling current node
	I0731 11:07:57.547248       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 11:07:57.547272       1 main.go:227] handling current node
	I0731 11:08:07.558872       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 11:08:07.558896       1 main.go:227] handling current node
	I0731 11:08:17.562949       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 11:08:17.562978       1 main.go:227] handling current node
	I0731 11:08:27.566466       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 11:08:27.566490       1 main.go:227] handling current node
	I0731 11:08:37.570848       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 11:08:37.570871       1 main.go:227] handling current node
	I0731 11:08:47.581778       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0731 11:08:47.581803       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [5f6d3cc8e1e52d8aacd7b937534f3c015edfe1db2bca9ba5f1fda4d8de810085] <==
	* I0731 11:05:06.109104       1 cache.go:32] Waiting for caches to sync for autoregister controller
	E0731 11:05:06.122312       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0731 11:05:06.230291       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0731 11:05:06.230306       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0731 11:05:06.230944       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 11:05:06.231034       1 cache.go:39] Caches are synced for autoregister controller
	I0731 11:05:06.231045       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 11:05:07.104888       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0731 11:05:07.105076       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0731 11:05:07.109555       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0731 11:05:07.112426       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0731 11:05:07.112442       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0731 11:05:07.452584       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0731 11:05:07.479569       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0731 11:05:07.569575       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0731 11:05:07.570386       1 controller.go:609] quota admission added evaluator for: endpoints
	I0731 11:05:07.573445       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0731 11:05:08.404614       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0731 11:05:09.043548       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0731 11:05:09.210839       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0731 11:05:09.375866       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 11:05:24.290251       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0731 11:05:24.300344       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0731 11:05:43.038799       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0731 11:06:06.236999       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [8a99b46925d23056e4affb2b2961e5302120484d5754d0a73061c781ab29a132] <==
	* I0731 11:05:24.365863       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"dee0f518-b9c1-48fb-9317-8ae1d85b2f1b", APIVersion:"apps/v1", ResourceVersion:"333", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-29x8v
	I0731 11:05:24.424977       1 shared_informer.go:230] Caches are synced for endpoint 
	I0731 11:05:24.425805       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I0731 11:05:24.474461       1 shared_informer.go:230] Caches are synced for HPA 
	I0731 11:05:24.496407       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"8d2bf889-c733-4a2e-9b1c-2a317ef5a86f", APIVersion:"apps/v1", ResourceVersion:"356", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0731 11:05:24.508152       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"dee0f518-b9c1-48fb-9317-8ae1d85b2f1b", APIVersion:"apps/v1", ResourceVersion:"357", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-29x8v
	I0731 11:05:24.630182       1 shared_informer.go:230] Caches are synced for disruption 
	I0731 11:05:24.630220       1 disruption.go:339] Sending events to api server.
	I0731 11:05:24.634420       1 shared_informer.go:230] Caches are synced for stateful set 
	I0731 11:05:24.653588       1 shared_informer.go:230] Caches are synced for bootstrap_signer 
	I0731 11:05:24.730272       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0731 11:05:24.730277       1 shared_informer.go:230] Caches are synced for resource quota 
	I0731 11:05:24.730325       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0731 11:05:24.730342       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0731 11:05:24.730277       1 shared_informer.go:230] Caches are synced for resource quota 
	I0731 11:05:34.275731       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0731 11:05:42.998848       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"04eb235e-2bdf-4086-9eef-22b9d726650d", APIVersion:"apps/v1", ResourceVersion:"465", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0731 11:05:43.039542       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"fcc19c08-d666-4ba9-87fe-2393c1a3219b", APIVersion:"apps/v1", ResourceVersion:"466", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-s2hcr
	I0731 11:05:43.045120       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"5ee485ef-4346-4476-99ea-717445721bad", APIVersion:"batch/v1", ResourceVersion:"470", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-plrjf
	I0731 11:05:43.059498       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"3d478604-c6e4-4ebc-93d2-923363909c65", APIVersion:"batch/v1", ResourceVersion:"480", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-2wspk
	I0731 11:05:45.501326       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"5ee485ef-4346-4476-99ea-717445721bad", APIVersion:"batch/v1", ResourceVersion:"482", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0731 11:05:45.507779       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"3d478604-c6e4-4ebc-93d2-923363909c65", APIVersion:"batch/v1", ResourceVersion:"489", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0731 11:08:26.505475       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"de082e3a-760f-4583-a9d9-f09a27e31e39", APIVersion:"apps/v1", ResourceVersion:"701", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0731 11:08:26.511781       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"bc755065-368a-4803-a300-951b82551bff", APIVersion:"apps/v1", ResourceVersion:"702", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-srcxf
	E0731 11:08:49.347078       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-58hhj" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [92d90e862c9eb5eeb789ef392e0fcb7d41796df9afa3dffbefee2ccfa61ab766] <==
	* W0731 11:05:25.259248       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0731 11:05:25.265171       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0731 11:05:25.265195       1 server_others.go:186] Using iptables Proxier.
	I0731 11:05:25.265426       1 server.go:583] Version: v1.18.20
	I0731 11:05:25.265793       1 config.go:315] Starting service config controller
	I0731 11:05:25.265815       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0731 11:05:25.265860       1 config.go:133] Starting endpoints config controller
	I0731 11:05:25.265939       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0731 11:05:25.365973       1 shared_informer.go:230] Caches are synced for service config 
	I0731 11:05:25.366112       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [d96e94430981d7a28fc7b5eebbc398d058e0e265eaa2c9b78c4812731c4f0417] <==
	* I0731 11:05:06.147994       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0731 11:05:06.149849       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0731 11:05:06.151685       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 11:05:06.151713       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 11:05:06.154909       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0731 11:05:06.154972       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0731 11:05:06.230639       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 11:05:06.230733       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 11:05:06.230876       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 11:05:06.231055       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 11:05:06.231209       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 11:05:06.231276       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 11:05:06.231415       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 11:05:06.231415       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 11:05:06.231591       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 11:05:06.231874       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 11:05:06.232271       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 11:05:06.232418       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 11:05:07.050014       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 11:05:07.087296       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 11:05:07.181223       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 11:05:07.331038       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 11:05:07.331277       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 11:05:07.335476       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0731 11:05:10.551920       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* Jul 31 11:08:11 ingress-addon-legacy-033299 kubelet[1868]: E0731 11:08:11.433329    1868 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jul 31 11:08:11 ingress-addon-legacy-033299 kubelet[1868]: E0731 11:08:11.433367    1868 pod_workers.go:191] Error syncing pod 32702ed3-0fcb-475a-8678-140a0eee34a4 ("kube-ingress-dns-minikube_kube-system(32702ed3-0fcb-475a-8678-140a0eee34a4)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Jul 31 11:08:24 ingress-addon-legacy-033299 kubelet[1868]: E0731 11:08:24.433212    1868 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jul 31 11:08:24 ingress-addon-legacy-033299 kubelet[1868]: E0731 11:08:24.433256    1868 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jul 31 11:08:24 ingress-addon-legacy-033299 kubelet[1868]: E0731 11:08:24.433302    1868 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jul 31 11:08:24 ingress-addon-legacy-033299 kubelet[1868]: E0731 11:08:24.433328    1868 pod_workers.go:191] Error syncing pod 32702ed3-0fcb-475a-8678-140a0eee34a4 ("kube-ingress-dns-minikube_kube-system(32702ed3-0fcb-475a-8678-140a0eee34a4)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Jul 31 11:08:26 ingress-addon-legacy-033299 kubelet[1868]: I0731 11:08:26.518781    1868 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Jul 31 11:08:26 ingress-addon-legacy-033299 kubelet[1868]: I0731 11:08:26.651305    1868 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-4cjsh" (UniqueName: "kubernetes.io/secret/29070276-7cfc-4099-ae00-35d6e4ca30de-default-token-4cjsh") pod "hello-world-app-5f5d8b66bb-srcxf" (UID: "29070276-7cfc-4099-ae00-35d6e4ca30de")
	Jul 31 11:08:26 ingress-addon-legacy-033299 kubelet[1868]: W0731 11:08:26.876783    1868 manager.go:1131] Failed to process watch event {EventType:0 Name:/docker/7733f8361d74918fcfb2e5b8dda7d67b374d2128e853cabbccce35be0b4cd890/crio-e7ce2b88a0b1fa54ddb7193b6d2b937ecc9f33d0da2efbe8849b27c5da61769d WatchSource:0}: Error finding container e7ce2b88a0b1fa54ddb7193b6d2b937ecc9f33d0da2efbe8849b27c5da61769d: Status 404 returned error &{%!s(*http.body=&{0xc0016466e0 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x750800) %!s(func() error=0x750790)}
	Jul 31 11:08:37 ingress-addon-legacy-033299 kubelet[1868]: E0731 11:08:37.433202    1868 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jul 31 11:08:37 ingress-addon-legacy-033299 kubelet[1868]: E0731 11:08:37.433245    1868 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jul 31 11:08:37 ingress-addon-legacy-033299 kubelet[1868]: E0731 11:08:37.433317    1868 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jul 31 11:08:37 ingress-addon-legacy-033299 kubelet[1868]: E0731 11:08:37.433358    1868 pod_workers.go:191] Error syncing pod 32702ed3-0fcb-475a-8678-140a0eee34a4 ("kube-ingress-dns-minikube_kube-system(32702ed3-0fcb-475a-8678-140a0eee34a4)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Jul 31 11:08:42 ingress-addon-legacy-033299 kubelet[1868]: I0731 11:08:42.286961    1868 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-2f2d4" (UniqueName: "kubernetes.io/secret/32702ed3-0fcb-475a-8678-140a0eee34a4-minikube-ingress-dns-token-2f2d4") pod "32702ed3-0fcb-475a-8678-140a0eee34a4" (UID: "32702ed3-0fcb-475a-8678-140a0eee34a4")
	Jul 31 11:08:42 ingress-addon-legacy-033299 kubelet[1868]: I0731 11:08:42.288937    1868 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32702ed3-0fcb-475a-8678-140a0eee34a4-minikube-ingress-dns-token-2f2d4" (OuterVolumeSpecName: "minikube-ingress-dns-token-2f2d4") pod "32702ed3-0fcb-475a-8678-140a0eee34a4" (UID: "32702ed3-0fcb-475a-8678-140a0eee34a4"). InnerVolumeSpecName "minikube-ingress-dns-token-2f2d4". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 31 11:08:42 ingress-addon-legacy-033299 kubelet[1868]: I0731 11:08:42.387267    1868 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-2f2d4" (UniqueName: "kubernetes.io/secret/32702ed3-0fcb-475a-8678-140a0eee34a4-minikube-ingress-dns-token-2f2d4") on node "ingress-addon-legacy-033299" DevicePath ""
	Jul 31 11:08:44 ingress-addon-legacy-033299 kubelet[1868]: E0731 11:08:44.542131    1868 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-s2hcr.1776ef35d0db5a6e", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-s2hcr", UID:"0b57f19e-d070-4e58-abb3-65ccd6e5c655", APIVersion:"v1", ResourceVersion:"471", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-033299"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc12a036f2038026e, ext:215533776452, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc12a036f2038026e, ext:215533776452, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-s2hcr.1776ef35d0db5a6e" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jul 31 11:08:44 ingress-addon-legacy-033299 kubelet[1868]: E0731 11:08:44.545739    1868 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-s2hcr.1776ef35d0db5a6e", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-s2hcr", UID:"0b57f19e-d070-4e58-abb3-65ccd6e5c655", APIVersion:"v1", ResourceVersion:"471", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-033299"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc12a036f2038026e, ext:215533776452, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc12a036f205be79d, ext:215536128889, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-s2hcr.1776ef35d0db5a6e" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jul 31 11:08:46 ingress-addon-legacy-033299 kubelet[1868]: W0731 11:08:46.790798    1868 pod_container_deletor.go:77] Container "ff60d629827a2721763ec78644e59dbc37cdcf0282932d8c3181291714976f4c" not found in pod's containers
	Jul 31 11:08:48 ingress-addon-legacy-033299 kubelet[1868]: I0731 11:08:48.740426    1868 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/0b57f19e-d070-4e58-abb3-65ccd6e5c655-webhook-cert") pod "0b57f19e-d070-4e58-abb3-65ccd6e5c655" (UID: "0b57f19e-d070-4e58-abb3-65ccd6e5c655")
	Jul 31 11:08:48 ingress-addon-legacy-033299 kubelet[1868]: I0731 11:08:48.740472    1868 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-gs5tf" (UniqueName: "kubernetes.io/secret/0b57f19e-d070-4e58-abb3-65ccd6e5c655-ingress-nginx-token-gs5tf") pod "0b57f19e-d070-4e58-abb3-65ccd6e5c655" (UID: "0b57f19e-d070-4e58-abb3-65ccd6e5c655")
	Jul 31 11:08:48 ingress-addon-legacy-033299 kubelet[1868]: I0731 11:08:48.742343    1868 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b57f19e-d070-4e58-abb3-65ccd6e5c655-ingress-nginx-token-gs5tf" (OuterVolumeSpecName: "ingress-nginx-token-gs5tf") pod "0b57f19e-d070-4e58-abb3-65ccd6e5c655" (UID: "0b57f19e-d070-4e58-abb3-65ccd6e5c655"). InnerVolumeSpecName "ingress-nginx-token-gs5tf". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 31 11:08:48 ingress-addon-legacy-033299 kubelet[1868]: I0731 11:08:48.742386    1868 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b57f19e-d070-4e58-abb3-65ccd6e5c655-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "0b57f19e-d070-4e58-abb3-65ccd6e5c655" (UID: "0b57f19e-d070-4e58-abb3-65ccd6e5c655"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 31 11:08:48 ingress-addon-legacy-033299 kubelet[1868]: I0731 11:08:48.840744    1868 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/0b57f19e-d070-4e58-abb3-65ccd6e5c655-webhook-cert") on node "ingress-addon-legacy-033299" DevicePath ""
	Jul 31 11:08:48 ingress-addon-legacy-033299 kubelet[1868]: I0731 11:08:48.840791    1868 reconciler.go:319] Volume detached for volume "ingress-nginx-token-gs5tf" (UniqueName: "kubernetes.io/secret/0b57f19e-d070-4e58-abb3-65ccd6e5c655-ingress-nginx-token-gs5tf") on node "ingress-addon-legacy-033299" DevicePath ""
	
	* 
	* ==> storage-provisioner [546abdc9faea064566dff1eb4584834b45b4917ae75d6e4b8bdf41a0a8e4dd12] <==
	* I0731 11:05:31.045711       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 11:05:31.052502       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 11:05:31.052551       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 11:05:31.079833       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 11:05:31.080012       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-033299_0d932720-7c64-443e-a3cc-f122bab23c8e!
	I0731 11:05:31.080079       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"16176296-bda0-45c8-ad35-75790040e40a", APIVersion:"v1", ResourceVersion:"407", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-033299_0d932720-7c64-443e-a3cc-f122bab23c8e became leader
	I0731 11:05:31.180558       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-033299_0d932720-7c64-443e-a3cc-f122bab23c8e!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-033299 -n ingress-addon-legacy-033299
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-033299 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (179.73s)
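Note on the kubelet "Server rejected event" error in the logs above: once the ingress addon is disabled, the ingress-nginx namespace enters the Terminating phase, and a terminating namespace rejects creation of any new object, Events included, which is exactly the "unable to create new content in namespace ingress-nginx because it is being terminated" message. A quick way to confirm the phase while teardown is in flight (illustrative command, assuming the namespace still exists when run):

	kubectl --context ingress-addon-legacy-033299 get namespace ingress-nginx -o jsonpath='{.status.phase}'
	# prints "Terminating" until the namespace is fully removed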

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (3.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-249026 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-249026 -- exec busybox-67b7f59bb-fvzbv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-249026 -- exec busybox-67b7f59bb-fvzbv -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-249026 -- exec busybox-67b7f59bb-fvzbv -- sh -c "ping -c 1 192.168.58.1": exit status 1 (165.224168ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-67b7f59bb-fvzbv): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-249026 -- exec busybox-67b7f59bb-nhmrt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-249026 -- exec busybox-67b7f59bb-nhmrt -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-249026 -- exec busybox-67b7f59bb-nhmrt -- sh -c "ping -c 1 192.168.58.1": exit status 1 (159.454849ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-67b7f59bb-nhmrt): exit status 1
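A note on this failure mode: busybox's ping opens a raw ICMP socket, so inside an unprivileged container it needs either CAP_NET_RAW or a node that permits unprivileged ICMP sockets via the net.ipv4.ping_group_range sysctl; "ping: permission denied (are you root?)" therefore points at a missing capability rather than broken routing. Two illustrative checks/workarounds (hypothetical commands, not part of the test run):

	# kernel default "1 0" disables unprivileged ICMP sockets for all groups
	minikube ssh -p multinode-249026 "sysctl net.ipv4.ping_group_range"

	# or grant the capability in the busybox pod spec:
	#   securityContext:
	#     capabilities:
	#       add: ["NET_RAW"]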
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-249026
helpers_test.go:235: (dbg) docker inspect multinode-249026:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6c9e307d8fbb6aa504e1b1671d79e2f602df444dc5ede76a67d98aec5cb168ff",
	        "Created": "2023-07-31T11:13:52.729440024Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 101276,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-31T11:13:53.029199811Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6cc01e6091959400f260dc442708e7c71630b58dab1f7c344cb00926bd84950",
	        "ResolvConfPath": "/var/lib/docker/containers/6c9e307d8fbb6aa504e1b1671d79e2f602df444dc5ede76a67d98aec5cb168ff/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6c9e307d8fbb6aa504e1b1671d79e2f602df444dc5ede76a67d98aec5cb168ff/hostname",
	        "HostsPath": "/var/lib/docker/containers/6c9e307d8fbb6aa504e1b1671d79e2f602df444dc5ede76a67d98aec5cb168ff/hosts",
	        "LogPath": "/var/lib/docker/containers/6c9e307d8fbb6aa504e1b1671d79e2f602df444dc5ede76a67d98aec5cb168ff/6c9e307d8fbb6aa504e1b1671d79e2f602df444dc5ede76a67d98aec5cb168ff-json.log",
	        "Name": "/multinode-249026",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-249026:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-249026",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/afaa63b78421fd24053418306f8a8210d5752d73482ffddda4b09a90b3825d3e-init/diff:/var/lib/docker/overlay2/024d10bc12a315dda5382be7dcc437728fbe4eb773f76ea4124e9f17d757e8de/diff",
	                "MergedDir": "/var/lib/docker/overlay2/afaa63b78421fd24053418306f8a8210d5752d73482ffddda4b09a90b3825d3e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/afaa63b78421fd24053418306f8a8210d5752d73482ffddda4b09a90b3825d3e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/afaa63b78421fd24053418306f8a8210d5752d73482ffddda4b09a90b3825d3e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-249026",
	                "Source": "/var/lib/docker/volumes/multinode-249026/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-249026",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-249026",
	                "name.minikube.sigs.k8s.io": "multinode-249026",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "038dee724120fe877cf4d223450ab78e21bf53028aacb2d088c9e33c97ab1dc5",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32847"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32846"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32843"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32845"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32844"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/038dee724120",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-249026": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6c9e307d8fbb",
	                        "multinode-249026"
	                    ],
	                    "NetworkID": "49120a2e135d005ed398e4c0d15d114c6d8d3ba0f1c1fd5bdb98fc3c09adaadd",
	                    "EndpointID": "001e427a29ecb1ce4b2944bf2bc610da1f37011a933aaec5ba841cd751c8fa9f",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-249026 -n multinode-249026
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-249026 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-249026 logs -n 25: (1.217715496s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-906460                           | mount-start-2-906460 | jenkins | v1.31.1 | 31 Jul 23 11:13 UTC | 31 Jul 23 11:13 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-906460 ssh -- ls                    | mount-start-2-906460 | jenkins | v1.31.1 | 31 Jul 23 11:13 UTC | 31 Jul 23 11:13 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-861088                           | mount-start-1-861088 | jenkins | v1.31.1 | 31 Jul 23 11:13 UTC | 31 Jul 23 11:13 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-906460 ssh -- ls                    | mount-start-2-906460 | jenkins | v1.31.1 | 31 Jul 23 11:13 UTC | 31 Jul 23 11:13 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-906460                           | mount-start-2-906460 | jenkins | v1.31.1 | 31 Jul 23 11:13 UTC | 31 Jul 23 11:13 UTC |
	| start   | -p mount-start-2-906460                           | mount-start-2-906460 | jenkins | v1.31.1 | 31 Jul 23 11:13 UTC | 31 Jul 23 11:13 UTC |
	| ssh     | mount-start-2-906460 ssh -- ls                    | mount-start-2-906460 | jenkins | v1.31.1 | 31 Jul 23 11:13 UTC | 31 Jul 23 11:13 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-906460                           | mount-start-2-906460 | jenkins | v1.31.1 | 31 Jul 23 11:13 UTC | 31 Jul 23 11:13 UTC |
	| delete  | -p mount-start-1-861088                           | mount-start-1-861088 | jenkins | v1.31.1 | 31 Jul 23 11:13 UTC | 31 Jul 23 11:13 UTC |
	| start   | -p multinode-249026                               | multinode-249026     | jenkins | v1.31.1 | 31 Jul 23 11:13 UTC | 31 Jul 23 11:15 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-249026 -- apply -f                   | multinode-249026     | jenkins | v1.31.1 | 31 Jul 23 11:15 UTC | 31 Jul 23 11:15 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-249026 -- rollout                    | multinode-249026     | jenkins | v1.31.1 | 31 Jul 23 11:15 UTC | 31 Jul 23 11:15 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-249026 -- get pods -o                | multinode-249026     | jenkins | v1.31.1 | 31 Jul 23 11:15 UTC | 31 Jul 23 11:15 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-249026 -- get pods -o                | multinode-249026     | jenkins | v1.31.1 | 31 Jul 23 11:15 UTC | 31 Jul 23 11:15 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-249026 -- exec                       | multinode-249026     | jenkins | v1.31.1 | 31 Jul 23 11:15 UTC | 31 Jul 23 11:15 UTC |
	|         | busybox-67b7f59bb-fvzbv --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-249026 -- exec                       | multinode-249026     | jenkins | v1.31.1 | 31 Jul 23 11:15 UTC | 31 Jul 23 11:15 UTC |
	|         | busybox-67b7f59bb-nhmrt --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-249026 -- exec                       | multinode-249026     | jenkins | v1.31.1 | 31 Jul 23 11:15 UTC | 31 Jul 23 11:15 UTC |
	|         | busybox-67b7f59bb-fvzbv --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-249026 -- exec                       | multinode-249026     | jenkins | v1.31.1 | 31 Jul 23 11:15 UTC | 31 Jul 23 11:15 UTC |
	|         | busybox-67b7f59bb-nhmrt --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-249026 -- exec                       | multinode-249026     | jenkins | v1.31.1 | 31 Jul 23 11:15 UTC | 31 Jul 23 11:15 UTC |
	|         | busybox-67b7f59bb-fvzbv -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-249026 -- exec                       | multinode-249026     | jenkins | v1.31.1 | 31 Jul 23 11:15 UTC | 31 Jul 23 11:15 UTC |
	|         | busybox-67b7f59bb-nhmrt -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-249026 -- get pods -o                | multinode-249026     | jenkins | v1.31.1 | 31 Jul 23 11:15 UTC | 31 Jul 23 11:15 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-249026 -- exec                       | multinode-249026     | jenkins | v1.31.1 | 31 Jul 23 11:15 UTC | 31 Jul 23 11:15 UTC |
	|         | busybox-67b7f59bb-fvzbv                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-249026 -- exec                       | multinode-249026     | jenkins | v1.31.1 | 31 Jul 23 11:15 UTC |                     |
	|         | busybox-67b7f59bb-fvzbv -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-249026 -- exec                       | multinode-249026     | jenkins | v1.31.1 | 31 Jul 23 11:15 UTC | 31 Jul 23 11:15 UTC |
	|         | busybox-67b7f59bb-nhmrt                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-249026 -- exec                       | multinode-249026     | jenkins | v1.31.1 | 31 Jul 23 11:15 UTC |                     |
	|         | busybox-67b7f59bb-nhmrt -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/31 11:13:46
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.20.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 11:13:46.859057  100669 out.go:296] Setting OutFile to fd 1 ...
	I0731 11:13:46.859146  100669 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 11:13:46.859153  100669 out.go:309] Setting ErrFile to fd 2...
	I0731 11:13:46.859158  100669 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 11:13:46.859347  100669 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16968-8855/.minikube/bin
	I0731 11:13:46.859921  100669 out.go:303] Setting JSON to false
	I0731 11:13:46.861227  100669 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3378,"bootTime":1690798649,"procs":833,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1038-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 11:13:46.861290  100669 start.go:138] virtualization: kvm guest
	I0731 11:13:46.863678  100669 out.go:177] * [multinode-249026] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 11:13:46.865345  100669 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 11:13:46.865315  100669 notify.go:220] Checking for updates...
	I0731 11:13:46.866881  100669 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 11:13:46.868576  100669 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16968-8855/kubeconfig
	I0731 11:13:46.870860  100669 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16968-8855/.minikube
	I0731 11:13:46.872248  100669 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 11:13:46.873440  100669 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 11:13:46.874866  100669 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 11:13:46.897840  100669 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0731 11:13:46.897919  100669 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 11:13:46.949068  100669 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:36 SystemTime:2023-07-31 11:13:46.94079109 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1038-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0731 11:13:46.949160  100669 docker.go:294] overlay module found
	I0731 11:13:46.951267  100669 out.go:177] * Using the docker driver based on user configuration
	I0731 11:13:46.952874  100669 start.go:298] selected driver: docker
	I0731 11:13:46.952892  100669 start.go:898] validating driver "docker" against <nil>
	I0731 11:13:46.952903  100669 start.go:909] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 11:13:46.953619  100669 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 11:13:47.007676  100669 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:36 SystemTime:2023-07-31 11:13:46.999138753 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1038-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0731 11:13:47.007833  100669 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0731 11:13:47.008071  100669 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 11:13:47.009759  100669 out.go:177] * Using Docker driver with root privileges
	I0731 11:13:47.011248  100669 cni.go:84] Creating CNI manager for ""
	I0731 11:13:47.011262  100669 cni.go:136] 0 nodes found, recommending kindnet
	I0731 11:13:47.011270  100669 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 11:13:47.011285  100669 start_flags.go:319] config:
	{Name:multinode-249026 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-249026 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 11:13:47.012881  100669 out.go:177] * Starting control plane node multinode-249026 in cluster multinode-249026
	I0731 11:13:47.014120  100669 cache.go:122] Beginning downloading kic base image for docker with crio
	I0731 11:13:47.015488  100669 out.go:177] * Pulling base image ...
	I0731 11:13:47.016808  100669 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0731 11:13:47.016840  100669 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16968-8855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0731 11:13:47.016849  100669 cache.go:57] Caching tarball of preloaded images
	I0731 11:13:47.016888  100669 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0731 11:13:47.016947  100669 preload.go:174] Found /home/jenkins/minikube-integration/16968-8855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 11:13:47.016960  100669 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0731 11:13:47.017280  100669 profile.go:148] Saving config to /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/multinode-249026/config.json ...
	I0731 11:13:47.017304  100669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/multinode-249026/config.json: {Name:mkfd75c03c3e970503fcd0561a888edea2666cb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:13:47.032454  100669 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0731 11:13:47.032489  100669 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0731 11:13:47.032507  100669 cache.go:195] Successfully downloaded all kic artifacts
	I0731 11:13:47.032534  100669 start.go:365] acquiring machines lock for multinode-249026: {Name:mkf6b5487b2f589c713c0308a623dd01c78dbd5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 11:13:47.032622  100669 start.go:369] acquired machines lock for "multinode-249026" in 70.75µs
	I0731 11:13:47.032646  100669 start.go:93] Provisioning new machine with config: &{Name:multinode-249026 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-249026 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 11:13:47.032726  100669 start.go:125] createHost starting for "" (driver="docker")
	I0731 11:13:47.034585  100669 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0731 11:13:47.034765  100669 start.go:159] libmachine.API.Create for "multinode-249026" (driver="docker")
	I0731 11:13:47.034789  100669 client.go:168] LocalClient.Create starting
	I0731 11:13:47.034865  100669 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca.pem
	I0731 11:13:47.034894  100669 main.go:141] libmachine: Decoding PEM data...
	I0731 11:13:47.034911  100669 main.go:141] libmachine: Parsing certificate...
	I0731 11:13:47.034958  100669 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16968-8855/.minikube/certs/cert.pem
	I0731 11:13:47.034978  100669 main.go:141] libmachine: Decoding PEM data...
	I0731 11:13:47.034991  100669 main.go:141] libmachine: Parsing certificate...
	I0731 11:13:47.035320  100669 cli_runner.go:164] Run: docker network inspect multinode-249026 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0731 11:13:47.050720  100669 cli_runner.go:211] docker network inspect multinode-249026 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0731 11:13:47.050772  100669 network_create.go:281] running [docker network inspect multinode-249026] to gather additional debugging logs...
	I0731 11:13:47.050788  100669 cli_runner.go:164] Run: docker network inspect multinode-249026
	W0731 11:13:47.065841  100669 cli_runner.go:211] docker network inspect multinode-249026 returned with exit code 1
	I0731 11:13:47.065876  100669 network_create.go:284] error running [docker network inspect multinode-249026]: docker network inspect multinode-249026: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-249026 not found
	I0731 11:13:47.065890  100669 network_create.go:286] output of [docker network inspect multinode-249026]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-249026 not found
	
	** /stderr **
	I0731 11:13:47.065926  100669 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 11:13:47.081592  100669 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-32c42db2927b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:33:93:48:16} reservation:<nil>}
	I0731 11:13:47.082093  100669 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0014037b0}
	I0731 11:13:47.082121  100669 network_create.go:123] attempt to create docker network multinode-249026 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0731 11:13:47.082163  100669 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-249026 multinode-249026
	I0731 11:13:47.133303  100669 network_create.go:107] docker network multinode-249026 192.168.58.0/24 created
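The subnet choice above can be read back from the freshly created network; a minimal sketch (illustrative command; Subnet and Gateway are standard fields of docker network inspect output):

	docker network inspect multinode-249026 --format '{{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'
	# for this run: 192.168.58.0/24 via 192.168.58.1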
	I0731 11:13:47.133338  100669 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-249026" container
	I0731 11:13:47.133397  100669 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0731 11:13:47.149578  100669 cli_runner.go:164] Run: docker volume create multinode-249026 --label name.minikube.sigs.k8s.io=multinode-249026 --label created_by.minikube.sigs.k8s.io=true
	I0731 11:13:47.165909  100669 oci.go:103] Successfully created a docker volume multinode-249026
	I0731 11:13:47.165986  100669 cli_runner.go:164] Run: docker run --rm --name multinode-249026-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-249026 --entrypoint /usr/bin/test -v multinode-249026:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0731 11:13:47.652228  100669 oci.go:107] Successfully prepared a docker volume multinode-249026
	I0731 11:13:47.652278  100669 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0731 11:13:47.652305  100669 kic.go:190] Starting extracting preloaded images to volume ...
	I0731 11:13:47.652359  100669 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16968-8855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-249026:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0731 11:13:52.665201  100669 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16968-8855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-249026:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (5.01277301s)
	I0731 11:13:52.665240  100669 kic.go:199] duration metric: took 5.012939 seconds to extract preloaded images to volume
	W0731 11:13:52.665362  100669 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0731 11:13:52.665457  100669 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0731 11:13:52.715151  100669 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-249026 --name multinode-249026 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-249026 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-249026 --network multinode-249026 --ip 192.168.58.2 --volume multinode-249026:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0731 11:13:53.037022  100669 cli_runner.go:164] Run: docker container inspect multinode-249026 --format={{.State.Running}}
	I0731 11:13:53.053872  100669 cli_runner.go:164] Run: docker container inspect multinode-249026 --format={{.State.Status}}
	I0731 11:13:53.071961  100669 cli_runner.go:164] Run: docker exec multinode-249026 stat /var/lib/dpkg/alternatives/iptables
	I0731 11:13:53.133060  100669 oci.go:144] the created container "multinode-249026" has a running status.
	I0731 11:13:53.133099  100669 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16968-8855/.minikube/machines/multinode-249026/id_rsa...
	I0731 11:13:53.235984  100669 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/machines/multinode-249026/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0731 11:13:53.236028  100669 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16968-8855/.minikube/machines/multinode-249026/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0731 11:13:53.255454  100669 cli_runner.go:164] Run: docker container inspect multinode-249026 --format={{.State.Status}}
	I0731 11:13:53.272160  100669 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0731 11:13:53.272182  100669 kic_runner.go:114] Args: [docker exec --privileged multinode-249026 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0731 11:13:53.346142  100669 cli_runner.go:164] Run: docker container inspect multinode-249026 --format={{.State.Status}}
	I0731 11:13:53.364033  100669 machine.go:88] provisioning docker machine ...
	I0731 11:13:53.364068  100669 ubuntu.go:169] provisioning hostname "multinode-249026"
	I0731 11:13:53.364129  100669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-249026
	I0731 11:13:53.383164  100669 main.go:141] libmachine: Using SSH client type: native
	I0731 11:13:53.383582  100669 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eb00] 0x811ba0 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I0731 11:13:53.383600  100669 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-249026 && echo "multinode-249026" | sudo tee /etc/hostname
	I0731 11:13:53.384128  100669 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42646->127.0.0.1:32847: read: connection reset by peer
	I0731 11:13:56.522195  100669 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-249026
	
	I0731 11:13:56.522281  100669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-249026
	I0731 11:13:56.538306  100669 main.go:141] libmachine: Using SSH client type: native
	I0731 11:13:56.538727  100669 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eb00] 0x811ba0 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I0731 11:13:56.538746  100669 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-249026' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-249026/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-249026' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 11:13:56.663844  100669 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 11:13:56.663892  100669 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16968-8855/.minikube CaCertPath:/home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16968-8855/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16968-8855/.minikube}
	I0731 11:13:56.663933  100669 ubuntu.go:177] setting up certificates
	I0731 11:13:56.663945  100669 provision.go:83] configureAuth start
	I0731 11:13:56.664001  100669 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-249026
	I0731 11:13:56.681837  100669 provision.go:138] copyHostCerts
	I0731 11:13:56.681870  100669 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16968-8855/.minikube/ca.pem
	I0731 11:13:56.681896  100669 exec_runner.go:144] found /home/jenkins/minikube-integration/16968-8855/.minikube/ca.pem, removing ...
	I0731 11:13:56.681901  100669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16968-8855/.minikube/ca.pem
	I0731 11:13:56.681961  100669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16968-8855/.minikube/ca.pem (1082 bytes)
	I0731 11:13:56.682026  100669 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16968-8855/.minikube/cert.pem
	I0731 11:13:56.682042  100669 exec_runner.go:144] found /home/jenkins/minikube-integration/16968-8855/.minikube/cert.pem, removing ...
	I0731 11:13:56.682046  100669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16968-8855/.minikube/cert.pem
	I0731 11:13:56.682068  100669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16968-8855/.minikube/cert.pem (1123 bytes)
	I0731 11:13:56.682107  100669 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16968-8855/.minikube/key.pem
	I0731 11:13:56.682122  100669 exec_runner.go:144] found /home/jenkins/minikube-integration/16968-8855/.minikube/key.pem, removing ...
	I0731 11:13:56.682128  100669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16968-8855/.minikube/key.pem
	I0731 11:13:56.682150  100669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16968-8855/.minikube/key.pem (1675 bytes)
	I0731 11:13:56.682196  100669 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16968-8855/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca-key.pem org=jenkins.multinode-249026 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-249026]
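	Note: the san=[...] list in the line above becomes the subject alternative names of the server certificate, so the API endpoint is valid whether it is reached by node IP, loopback, or hostname. A minimal sketch of issuing such a certificate with Go's crypto/x509, using the SANs and org from the log line (the helper name and three-year validity are illustrative assumptions):

	package provision

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// signServerCert issues a server certificate carrying the SANs from the
	// provision.go log line above, signed by the cluster CA key pair.
	func signServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-249026"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0), // illustrative validity
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "multinode-249026"},
			IPAddresses:  []net.IP{net.ParseIP("192.168.58.2"), net.ParseIP("127.0.0.1")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		return der, key, nil
	}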
	I0731 11:13:56.835678  100669 provision.go:172] copyRemoteCerts
	I0731 11:13:56.835734  100669 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 11:13:56.835765  100669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-249026
	I0731 11:13:56.852069  100669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/multinode-249026/id_rsa Username:docker}
	I0731 11:13:56.944233  100669 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 11:13:56.944287  100669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 11:13:56.964828  100669 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 11:13:56.964885  100669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0731 11:13:56.985960  100669 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 11:13:56.986009  100669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 11:13:57.007254  100669 provision.go:86] duration metric: configureAuth took 343.294921ms
	I0731 11:13:57.007280  100669 ubuntu.go:193] setting minikube options for container-runtime
	I0731 11:13:57.007457  100669 config.go:182] Loaded profile config "multinode-249026": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0731 11:13:57.007551  100669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-249026
	I0731 11:13:57.023300  100669 main.go:141] libmachine: Using SSH client type: native
	I0731 11:13:57.023698  100669 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eb00] 0x811ba0 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I0731 11:13:57.023714  100669 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 11:13:57.235320  100669 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 11:13:57.235346  100669 machine.go:91] provisioned docker machine in 3.871289791s
	I0731 11:13:57.235357  100669 client.go:171] LocalClient.Create took 10.200564404s
	I0731 11:13:57.235380  100669 start.go:167] duration metric: libmachine.API.Create for "multinode-249026" took 10.200613428s
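	Note: the "%!s(MISSING)" in the logged command above (and the "21%! (MISSING)" and "%!p(MISSING)" fragments further down) is not part of the shell command that actually ran. It is Go's fmt package flagging a %s/%p verb with no matching argument, which happens when a raw command string containing a literal % verb is handed to a printf-style logger as the format string. A two-line demonstration:

	package main

	import "fmt"

	func main() {
		cmd := `sudo mkdir -p /etc/sysconfig && printf %s "..."`
		fmt.Printf(cmd + "\n")  // cmd used as the format string: %s has no argument
		// Output: sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "..."
		fmt.Printf("%s\n", cmd) // passed as an argument, the command logs verbatim
	}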
	I0731 11:13:57.235388  100669 start.go:300] post-start starting for "multinode-249026" (driver="docker")
	I0731 11:13:57.235401  100669 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 11:13:57.235481  100669 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 11:13:57.235529  100669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-249026
	I0731 11:13:57.251454  100669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/multinode-249026/id_rsa Username:docker}
	I0731 11:13:57.344484  100669 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 11:13:57.347209  100669 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.2 LTS"
	I0731 11:13:57.347229  100669 command_runner.go:130] > NAME="Ubuntu"
	I0731 11:13:57.347235  100669 command_runner.go:130] > VERSION_ID="22.04"
	I0731 11:13:57.347240  100669 command_runner.go:130] > VERSION="22.04.2 LTS (Jammy Jellyfish)"
	I0731 11:13:57.347248  100669 command_runner.go:130] > VERSION_CODENAME=jammy
	I0731 11:13:57.347252  100669 command_runner.go:130] > ID=ubuntu
	I0731 11:13:57.347255  100669 command_runner.go:130] > ID_LIKE=debian
	I0731 11:13:57.347260  100669 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0731 11:13:57.347267  100669 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0731 11:13:57.347301  100669 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0731 11:13:57.347315  100669 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0731 11:13:57.347319  100669 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0731 11:13:57.347359  100669 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0731 11:13:57.347394  100669 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0731 11:13:57.347410  100669 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0731 11:13:57.347419  100669 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0731 11:13:57.347428  100669 filesync.go:126] Scanning /home/jenkins/minikube-integration/16968-8855/.minikube/addons for local assets ...
	I0731 11:13:57.347485  100669 filesync.go:126] Scanning /home/jenkins/minikube-integration/16968-8855/.minikube/files for local assets ...
	I0731 11:13:57.347578  100669 filesync.go:149] local asset: /home/jenkins/minikube-integration/16968-8855/.minikube/files/etc/ssl/certs/156462.pem -> 156462.pem in /etc/ssl/certs
	I0731 11:13:57.347587  100669 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/files/etc/ssl/certs/156462.pem -> /etc/ssl/certs/156462.pem
	I0731 11:13:57.347683  100669 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 11:13:57.355136  100669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/files/etc/ssl/certs/156462.pem --> /etc/ssl/certs/156462.pem (1708 bytes)
	I0731 11:13:57.375460  100669 start.go:303] post-start completed in 140.058365ms
	I0731 11:13:57.375791  100669 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-249026
	I0731 11:13:57.392700  100669 profile.go:148] Saving config to /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/multinode-249026/config.json ...
	I0731 11:13:57.392959  100669 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 11:13:57.393006  100669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-249026
	I0731 11:13:57.408483  100669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/multinode-249026/id_rsa Username:docker}
	I0731 11:13:57.500322  100669 command_runner.go:130] > 21%! (MISSING)
	I0731 11:13:57.500521  100669 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0731 11:13:57.504492  100669 command_runner.go:130] > 230G
	I0731 11:13:57.504517  100669 start.go:128] duration metric: createHost completed in 10.47178279s
	I0731 11:13:57.504527  100669 start.go:83] releasing machines lock for "multinode-249026", held for 10.471895231s
	I0731 11:13:57.504590  100669 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-249026
	I0731 11:13:57.520628  100669 ssh_runner.go:195] Run: cat /version.json
	I0731 11:13:57.520686  100669 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 11:13:57.520689  100669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-249026
	I0731 11:13:57.520740  100669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-249026
	I0731 11:13:57.537992  100669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/multinode-249026/id_rsa Username:docker}
	I0731 11:13:57.538429  100669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/multinode-249026/id_rsa Username:docker}
	I0731 11:13:57.623594  100669 command_runner.go:130] > {"iso_version": "v1.30.1-1689243309-16875", "kicbase_version": "v0.0.40", "minikube_version": "v1.31.0", "commit": "085433cd1b734742870dea5be8f9ee2ce4c54148"}
	I0731 11:13:57.623760  100669 ssh_runner.go:195] Run: systemctl --version
	I0731 11:13:57.724385  100669 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0731 11:13:57.724427  100669 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.9)
	I0731 11:13:57.724450  100669 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
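	Note: the curl -sS -m 2 https://registry.k8s.io/ call is a reachability probe; the "Temporary Redirect" HTML in the output confirms the node has outbound access to the image registry. An equivalent probe in Go, with curl's 2-second budget mirrored as a client timeout (illustrative, not minikube's implementation):

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 2 * time.Second}
		resp, err := client.Get("https://registry.k8s.io/")
		if err != nil {
			fmt.Println("registry unreachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(io.LimitReader(resp.Body, 512))
		fmt.Printf("reachable: %s (%d bytes read)\n", resp.Status, len(body))
	}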
	I0731 11:13:57.724504  100669 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 11:13:57.860492  100669 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0731 11:13:57.864363  100669 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0731 11:13:57.864381  100669 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0731 11:13:57.864387  100669 command_runner.go:130] > Device: 37h/55d	Inode: 552416      Links: 1
	I0731 11:13:57.864393  100669 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0731 11:13:57.864399  100669 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0731 11:13:57.864404  100669 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0731 11:13:57.864408  100669 command_runner.go:130] > Change: 2023-07-31 10:55:52.254677710 +0000
	I0731 11:13:57.864414  100669 command_runner.go:130] >  Birth: 2023-07-31 10:55:52.254677710 +0000
	I0731 11:13:57.864599  100669 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 11:13:57.881251  100669 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0731 11:13:57.881334  100669 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 11:13:57.906753  100669 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0731 11:13:57.906796  100669 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
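	Note: both find ... -exec mv {} {}.mk_disabled invocations above disable CNI configs by renaming rather than deleting them, so the loopback and bridge configs can be restored later. The same pattern in Go (the glob and suffix are taken from the log; the program itself is an illustrative sketch):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		matches, err := filepath.Glob("/etc/cni/net.d/*loopback.conf*")
		if err != nil {
			panic(err)
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled on a previous run
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				fmt.Println("rename failed:", err)
			}
		}
	}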
	I0731 11:13:57.906805  100669 start.go:466] detecting cgroup driver to use...
	I0731 11:13:57.906837  100669 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0731 11:13:57.906883  100669 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 11:13:57.919987  100669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 11:13:57.929721  100669 docker.go:196] disabling cri-docker service (if available) ...
	I0731 11:13:57.929765  100669 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 11:13:57.941468  100669 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 11:13:57.953556  100669 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 11:13:58.028169  100669 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 11:13:58.111713  100669 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0731 11:13:58.111741  100669 docker.go:212] disabling docker service ...
	I0731 11:13:58.111776  100669 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 11:13:58.128129  100669 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 11:13:58.137682  100669 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 11:13:58.215810  100669 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0731 11:13:58.215874  100669 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 11:13:58.226155  100669 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0731 11:13:58.292330  100669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 11:13:58.302063  100669 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 11:13:58.314714  100669 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0731 11:13:58.315396  100669 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 11:13:58.315444  100669 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 11:13:58.323686  100669 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 11:13:58.323735  100669 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 11:13:58.331730  100669 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 11:13:58.339451  100669 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
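	Note: assuming the sed edits above apply cleanly, the relevant lines of /etc/crio/crio.conf.d/02-crio.conf afterwards would read roughly:

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"

	The conmon_cgroup = "pod" line reappears verbatim in the crio config dump further down, confirming the edit took effect.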
	I0731 11:13:58.347354  100669 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 11:13:58.354994  100669 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 11:13:58.361810  100669 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0731 11:13:58.362458  100669 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 11:13:58.369627  100669 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 11:13:58.442791  100669 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 11:13:58.538457  100669 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 11:13:58.538512  100669 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 11:13:58.541827  100669 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0731 11:13:58.541848  100669 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0731 11:13:58.541857  100669 command_runner.go:130] > Device: 40h/64d	Inode: 186         Links: 1
	I0731 11:13:58.541868  100669 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0731 11:13:58.541877  100669 command_runner.go:130] > Access: 2023-07-31 11:13:58.523912185 +0000
	I0731 11:13:58.541887  100669 command_runner.go:130] > Modify: 2023-07-31 11:13:58.523912185 +0000
	I0731 11:13:58.541898  100669 command_runner.go:130] > Change: 2023-07-31 11:13:58.523912185 +0000
	I0731 11:13:58.541905  100669 command_runner.go:130] >  Birth: -
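	Note: "Will wait 60s for socket path" describes a poll loop: stat the socket until it exists or the budget runs out; here the stat succeeds on the first try. A minimal version of that wait (the 500 ms interval and error text are illustrative assumptions):

	package cruntime

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until the CRI socket exists or the budget runs out.
	func waitForSocket(path string, budget time.Duration) error {
		deadline := time.Now().Add(budget)
		for time.Now().Before(deadline) {
			if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
				return nil // CRI-O is up and listening
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out after %s waiting for %s", budget, path)
	}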
	I0731 11:13:58.541946  100669 start.go:534] Will wait 60s for crictl version
	I0731 11:13:58.542015  100669 ssh_runner.go:195] Run: which crictl
	I0731 11:13:58.544901  100669 command_runner.go:130] > /usr/bin/crictl
	I0731 11:13:58.544999  100669 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 11:13:58.576048  100669 command_runner.go:130] > Version:  0.1.0
	I0731 11:13:58.576067  100669 command_runner.go:130] > RuntimeName:  cri-o
	I0731 11:13:58.576072  100669 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0731 11:13:58.576077  100669 command_runner.go:130] > RuntimeApiVersion:  v1
	I0731 11:13:58.576092  100669 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0731 11:13:58.576143  100669 ssh_runner.go:195] Run: crio --version
	I0731 11:13:58.608090  100669 command_runner.go:130] > crio version 1.24.6
	I0731 11:13:58.608108  100669 command_runner.go:130] > Version:          1.24.6
	I0731 11:13:58.608118  100669 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0731 11:13:58.608122  100669 command_runner.go:130] > GitTreeState:     clean
	I0731 11:13:58.608128  100669 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0731 11:13:58.608133  100669 command_runner.go:130] > GoVersion:        go1.18.2
	I0731 11:13:58.608136  100669 command_runner.go:130] > Compiler:         gc
	I0731 11:13:58.608140  100669 command_runner.go:130] > Platform:         linux/amd64
	I0731 11:13:58.608145  100669 command_runner.go:130] > Linkmode:         dynamic
	I0731 11:13:58.608152  100669 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0731 11:13:58.608161  100669 command_runner.go:130] > SeccompEnabled:   true
	I0731 11:13:58.608167  100669 command_runner.go:130] > AppArmorEnabled:  false
	I0731 11:13:58.609715  100669 ssh_runner.go:195] Run: crio --version
	I0731 11:13:58.640985  100669 command_runner.go:130] > crio version 1.24.6
	I0731 11:13:58.641003  100669 command_runner.go:130] > Version:          1.24.6
	I0731 11:13:58.641010  100669 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0731 11:13:58.641014  100669 command_runner.go:130] > GitTreeState:     clean
	I0731 11:13:58.641020  100669 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0731 11:13:58.641024  100669 command_runner.go:130] > GoVersion:        go1.18.2
	I0731 11:13:58.641028  100669 command_runner.go:130] > Compiler:         gc
	I0731 11:13:58.641032  100669 command_runner.go:130] > Platform:         linux/amd64
	I0731 11:13:58.641037  100669 command_runner.go:130] > Linkmode:         dynamic
	I0731 11:13:58.641051  100669 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0731 11:13:58.641056  100669 command_runner.go:130] > SeccompEnabled:   true
	I0731 11:13:58.641060  100669 command_runner.go:130] > AppArmorEnabled:  false
	I0731 11:13:58.643123  100669 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.6 ...
	I0731 11:13:58.644536  100669 cli_runner.go:164] Run: docker network inspect multinode-249026 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 11:13:58.661251  100669 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0731 11:13:58.665005  100669 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
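	Note: the one-liner above rewrites /etc/hosts by filtering out any stale host.minikube.internal entry, appending the fresh mapping, and copying a temp file back over the original so the file is never left half-written. The same filter-and-append step in Go (a sketch; real code would preserve permissions and write atomically):

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\thost.minikube.internal") {
				kept = append(kept, line) // drop only the stale mapping
			}
		}
		kept = append(kept, "192.168.58.1\thost.minikube.internal")
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			panic(err)
		}
	}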
	I0731 11:13:58.675094  100669 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0731 11:13:58.675144  100669 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 11:13:58.722642  100669 command_runner.go:130] > {
	I0731 11:13:58.722661  100669 command_runner.go:130] >   "images": [
	I0731 11:13:58.722666  100669 command_runner.go:130] >     {
	I0731 11:13:58.722673  100669 command_runner.go:130] >       "id": "b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da",
	I0731 11:13:58.722678  100669 command_runner.go:130] >       "repoTags": [
	I0731 11:13:58.722689  100669 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0731 11:13:58.722692  100669 command_runner.go:130] >       ],
	I0731 11:13:58.722697  100669 command_runner.go:130] >       "repoDigests": [
	I0731 11:13:58.722705  100669 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974",
	I0731 11:13:58.722712  100669 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"
	I0731 11:13:58.722719  100669 command_runner.go:130] >       ],
	I0731 11:13:58.722724  100669 command_runner.go:130] >       "size": "65249302",
	I0731 11:13:58.722731  100669 command_runner.go:130] >       "uid": null,
	I0731 11:13:58.722739  100669 command_runner.go:130] >       "username": "",
	I0731 11:13:58.722745  100669 command_runner.go:130] >       "spec": null,
	I0731 11:13:58.722751  100669 command_runner.go:130] >       "pinned": false
	I0731 11:13:58.722755  100669 command_runner.go:130] >     },
	I0731 11:13:58.722759  100669 command_runner.go:130] >     {
	I0731 11:13:58.722765  100669 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0731 11:13:58.722771  100669 command_runner.go:130] >       "repoTags": [
	I0731 11:13:58.722776  100669 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0731 11:13:58.722782  100669 command_runner.go:130] >       ],
	I0731 11:13:58.722787  100669 command_runner.go:130] >       "repoDigests": [
	I0731 11:13:58.722794  100669 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0731 11:13:58.722803  100669 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0731 11:13:58.722806  100669 command_runner.go:130] >       ],
	I0731 11:13:58.722812  100669 command_runner.go:130] >       "size": "31470524",
	I0731 11:13:58.722818  100669 command_runner.go:130] >       "uid": null,
	I0731 11:13:58.722822  100669 command_runner.go:130] >       "username": "",
	I0731 11:13:58.722828  100669 command_runner.go:130] >       "spec": null,
	I0731 11:13:58.722832  100669 command_runner.go:130] >       "pinned": false
	I0731 11:13:58.722839  100669 command_runner.go:130] >     },
	I0731 11:13:58.722844  100669 command_runner.go:130] >     {
	I0731 11:13:58.722850  100669 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0731 11:13:58.722856  100669 command_runner.go:130] >       "repoTags": [
	I0731 11:13:58.722861  100669 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0731 11:13:58.722867  100669 command_runner.go:130] >       ],
	I0731 11:13:58.722871  100669 command_runner.go:130] >       "repoDigests": [
	I0731 11:13:58.722880  100669 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0731 11:13:58.722889  100669 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0731 11:13:58.722893  100669 command_runner.go:130] >       ],
	I0731 11:13:58.722897  100669 command_runner.go:130] >       "size": "53621675",
	I0731 11:13:58.722903  100669 command_runner.go:130] >       "uid": null,
	I0731 11:13:58.722907  100669 command_runner.go:130] >       "username": "",
	I0731 11:13:58.722914  100669 command_runner.go:130] >       "spec": null,
	I0731 11:13:58.722918  100669 command_runner.go:130] >       "pinned": false
	I0731 11:13:58.722924  100669 command_runner.go:130] >     },
	I0731 11:13:58.722928  100669 command_runner.go:130] >     {
	I0731 11:13:58.722936  100669 command_runner.go:130] >       "id": "86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681",
	I0731 11:13:58.722945  100669 command_runner.go:130] >       "repoTags": [
	I0731 11:13:58.722952  100669 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.7-0"
	I0731 11:13:58.722956  100669 command_runner.go:130] >       ],
	I0731 11:13:58.722962  100669 command_runner.go:130] >       "repoDigests": [
	I0731 11:13:58.722969  100669 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83",
	I0731 11:13:58.722977  100669 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9"
	I0731 11:13:58.722990  100669 command_runner.go:130] >       ],
	I0731 11:13:58.722998  100669 command_runner.go:130] >       "size": "297083935",
	I0731 11:13:58.723002  100669 command_runner.go:130] >       "uid": {
	I0731 11:13:58.723009  100669 command_runner.go:130] >         "value": "0"
	I0731 11:13:58.723013  100669 command_runner.go:130] >       },
	I0731 11:13:58.723019  100669 command_runner.go:130] >       "username": "",
	I0731 11:13:58.723023  100669 command_runner.go:130] >       "spec": null,
	I0731 11:13:58.723029  100669 command_runner.go:130] >       "pinned": false
	I0731 11:13:58.723033  100669 command_runner.go:130] >     },
	I0731 11:13:58.723038  100669 command_runner.go:130] >     {
	I0731 11:13:58.723045  100669 command_runner.go:130] >       "id": "08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a",
	I0731 11:13:58.723051  100669 command_runner.go:130] >       "repoTags": [
	I0731 11:13:58.723059  100669 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.27.3"
	I0731 11:13:58.723064  100669 command_runner.go:130] >       ],
	I0731 11:13:58.723069  100669 command_runner.go:130] >       "repoDigests": [
	I0731 11:13:58.723078  100669 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb",
	I0731 11:13:58.723085  100669 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0"
	I0731 11:13:58.723091  100669 command_runner.go:130] >       ],
	I0731 11:13:58.723095  100669 command_runner.go:130] >       "size": "122065872",
	I0731 11:13:58.723101  100669 command_runner.go:130] >       "uid": {
	I0731 11:13:58.723106  100669 command_runner.go:130] >         "value": "0"
	I0731 11:13:58.723112  100669 command_runner.go:130] >       },
	I0731 11:13:58.723116  100669 command_runner.go:130] >       "username": "",
	I0731 11:13:58.723122  100669 command_runner.go:130] >       "spec": null,
	I0731 11:13:58.723127  100669 command_runner.go:130] >       "pinned": false
	I0731 11:13:58.723133  100669 command_runner.go:130] >     },
	I0731 11:13:58.723137  100669 command_runner.go:130] >     {
	I0731 11:13:58.723145  100669 command_runner.go:130] >       "id": "7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f",
	I0731 11:13:58.723151  100669 command_runner.go:130] >       "repoTags": [
	I0731 11:13:58.723157  100669 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.27.3"
	I0731 11:13:58.723168  100669 command_runner.go:130] >       ],
	I0731 11:13:58.723174  100669 command_runner.go:130] >       "repoDigests": [
	I0731 11:13:58.723181  100669 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e",
	I0731 11:13:58.723191  100669 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:d3bdc20876edfaa4894cf8464dc98592385a43cbc033b37846dccc2460c7bc06"
	I0731 11:13:58.723196  100669 command_runner.go:130] >       ],
	I0731 11:13:58.723201  100669 command_runner.go:130] >       "size": "113919286",
	I0731 11:13:58.723206  100669 command_runner.go:130] >       "uid": {
	I0731 11:13:58.723210  100669 command_runner.go:130] >         "value": "0"
	I0731 11:13:58.723217  100669 command_runner.go:130] >       },
	I0731 11:13:58.723221  100669 command_runner.go:130] >       "username": "",
	I0731 11:13:58.723227  100669 command_runner.go:130] >       "spec": null,
	I0731 11:13:58.723231  100669 command_runner.go:130] >       "pinned": false
	I0731 11:13:58.723237  100669 command_runner.go:130] >     },
	I0731 11:13:58.723241  100669 command_runner.go:130] >     {
	I0731 11:13:58.723249  100669 command_runner.go:130] >       "id": "5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c",
	I0731 11:13:58.723256  100669 command_runner.go:130] >       "repoTags": [
	I0731 11:13:58.723261  100669 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.27.3"
	I0731 11:13:58.723267  100669 command_runner.go:130] >       ],
	I0731 11:13:58.723274  100669 command_runner.go:130] >       "repoDigests": [
	I0731 11:13:58.723283  100669 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f",
	I0731 11:13:58.723292  100669 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699"
	I0731 11:13:58.723298  100669 command_runner.go:130] >       ],
	I0731 11:13:58.723302  100669 command_runner.go:130] >       "size": "72713623",
	I0731 11:13:58.723309  100669 command_runner.go:130] >       "uid": null,
	I0731 11:13:58.723313  100669 command_runner.go:130] >       "username": "",
	I0731 11:13:58.723317  100669 command_runner.go:130] >       "spec": null,
	I0731 11:13:58.723323  100669 command_runner.go:130] >       "pinned": false
	I0731 11:13:58.723327  100669 command_runner.go:130] >     },
	I0731 11:13:58.723332  100669 command_runner.go:130] >     {
	I0731 11:13:58.723338  100669 command_runner.go:130] >       "id": "41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a",
	I0731 11:13:58.723345  100669 command_runner.go:130] >       "repoTags": [
	I0731 11:13:58.723350  100669 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.27.3"
	I0731 11:13:58.723355  100669 command_runner.go:130] >       ],
	I0731 11:13:58.723360  100669 command_runner.go:130] >       "repoDigests": [
	I0731 11:13:58.723396  100669 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082",
	I0731 11:13:58.723407  100669 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8"
	I0731 11:13:58.723412  100669 command_runner.go:130] >       ],
	I0731 11:13:58.723417  100669 command_runner.go:130] >       "size": "59811126",
	I0731 11:13:58.723420  100669 command_runner.go:130] >       "uid": {
	I0731 11:13:58.723425  100669 command_runner.go:130] >         "value": "0"
	I0731 11:13:58.723429  100669 command_runner.go:130] >       },
	I0731 11:13:58.723435  100669 command_runner.go:130] >       "username": "",
	I0731 11:13:58.723439  100669 command_runner.go:130] >       "spec": null,
	I0731 11:13:58.723444  100669 command_runner.go:130] >       "pinned": false
	I0731 11:13:58.723449  100669 command_runner.go:130] >     },
	I0731 11:13:58.723453  100669 command_runner.go:130] >     {
	I0731 11:13:58.723459  100669 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0731 11:13:58.723465  100669 command_runner.go:130] >       "repoTags": [
	I0731 11:13:58.723470  100669 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0731 11:13:58.723476  100669 command_runner.go:130] >       ],
	I0731 11:13:58.723481  100669 command_runner.go:130] >       "repoDigests": [
	I0731 11:13:58.723495  100669 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0731 11:13:58.723509  100669 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0731 11:13:58.723515  100669 command_runner.go:130] >       ],
	I0731 11:13:58.723523  100669 command_runner.go:130] >       "size": "750414",
	I0731 11:13:58.723527  100669 command_runner.go:130] >       "uid": {
	I0731 11:13:58.723531  100669 command_runner.go:130] >         "value": "65535"
	I0731 11:13:58.723534  100669 command_runner.go:130] >       },
	I0731 11:13:58.723538  100669 command_runner.go:130] >       "username": "",
	I0731 11:13:58.723544  100669 command_runner.go:130] >       "spec": null,
	I0731 11:13:58.723549  100669 command_runner.go:130] >       "pinned": false
	I0731 11:13:58.723555  100669 command_runner.go:130] >     }
	I0731 11:13:58.723558  100669 command_runner.go:130] >   ]
	I0731 11:13:58.723564  100669 command_runner.go:130] > }
	I0731 11:13:58.725211  100669 crio.go:496] all images are preloaded for cri-o runtime.
	I0731 11:13:58.725229  100669 crio.go:415] Images already preloaded, skipping extraction
	I0731 11:13:58.725280  100669 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 11:13:58.755780  100669 command_runner.go:130] > {
	I0731 11:13:58.755806  100669 command_runner.go:130] >   "images": [
	I0731 11:13:58.755812  100669 command_runner.go:130] >     {
	I0731 11:13:58.755825  100669 command_runner.go:130] >       "id": "b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da",
	I0731 11:13:58.755833  100669 command_runner.go:130] >       "repoTags": [
	I0731 11:13:58.755851  100669 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0731 11:13:58.755856  100669 command_runner.go:130] >       ],
	I0731 11:13:58.755866  100669 command_runner.go:130] >       "repoDigests": [
	I0731 11:13:58.755896  100669 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974",
	I0731 11:13:58.755914  100669 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"
	I0731 11:13:58.755920  100669 command_runner.go:130] >       ],
	I0731 11:13:58.755931  100669 command_runner.go:130] >       "size": "65249302",
	I0731 11:13:58.755940  100669 command_runner.go:130] >       "uid": null,
	I0731 11:13:58.755946  100669 command_runner.go:130] >       "username": "",
	I0731 11:13:58.755957  100669 command_runner.go:130] >       "spec": null,
	I0731 11:13:58.755961  100669 command_runner.go:130] >       "pinned": false
	I0731 11:13:58.755967  100669 command_runner.go:130] >     },
	I0731 11:13:58.755970  100669 command_runner.go:130] >     {
	I0731 11:13:58.755979  100669 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0731 11:13:58.755983  100669 command_runner.go:130] >       "repoTags": [
	I0731 11:13:58.755988  100669 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0731 11:13:58.755991  100669 command_runner.go:130] >       ],
	I0731 11:13:58.755996  100669 command_runner.go:130] >       "repoDigests": [
	I0731 11:13:58.756003  100669 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0731 11:13:58.756009  100669 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0731 11:13:58.756013  100669 command_runner.go:130] >       ],
	I0731 11:13:58.756019  100669 command_runner.go:130] >       "size": "31470524",
	I0731 11:13:58.756023  100669 command_runner.go:130] >       "uid": null,
	I0731 11:13:58.756028  100669 command_runner.go:130] >       "username": "",
	I0731 11:13:58.756032  100669 command_runner.go:130] >       "spec": null,
	I0731 11:13:58.756036  100669 command_runner.go:130] >       "pinned": false
	I0731 11:13:58.756039  100669 command_runner.go:130] >     },
	I0731 11:13:58.756044  100669 command_runner.go:130] >     {
	I0731 11:13:58.756049  100669 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0731 11:13:58.756053  100669 command_runner.go:130] >       "repoTags": [
	I0731 11:13:58.756058  100669 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0731 11:13:58.756061  100669 command_runner.go:130] >       ],
	I0731 11:13:58.756065  100669 command_runner.go:130] >       "repoDigests": [
	I0731 11:13:58.756072  100669 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0731 11:13:58.756079  100669 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0731 11:13:58.756082  100669 command_runner.go:130] >       ],
	I0731 11:13:58.756086  100669 command_runner.go:130] >       "size": "53621675",
	I0731 11:13:58.756090  100669 command_runner.go:130] >       "uid": null,
	I0731 11:13:58.756093  100669 command_runner.go:130] >       "username": "",
	I0731 11:13:58.756097  100669 command_runner.go:130] >       "spec": null,
	I0731 11:13:58.756101  100669 command_runner.go:130] >       "pinned": false
	I0731 11:13:58.756104  100669 command_runner.go:130] >     },
	I0731 11:13:58.756107  100669 command_runner.go:130] >     {
	I0731 11:13:58.756113  100669 command_runner.go:130] >       "id": "86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681",
	I0731 11:13:58.756118  100669 command_runner.go:130] >       "repoTags": [
	I0731 11:13:58.756123  100669 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.7-0"
	I0731 11:13:58.756126  100669 command_runner.go:130] >       ],
	I0731 11:13:58.756131  100669 command_runner.go:130] >       "repoDigests": [
	I0731 11:13:58.756138  100669 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83",
	I0731 11:13:58.756147  100669 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9"
	I0731 11:13:58.756154  100669 command_runner.go:130] >       ],
	I0731 11:13:58.756158  100669 command_runner.go:130] >       "size": "297083935",
	I0731 11:13:58.756164  100669 command_runner.go:130] >       "uid": {
	I0731 11:13:58.756168  100669 command_runner.go:130] >         "value": "0"
	I0731 11:13:58.756172  100669 command_runner.go:130] >       },
	I0731 11:13:58.756176  100669 command_runner.go:130] >       "username": "",
	I0731 11:13:58.756180  100669 command_runner.go:130] >       "spec": null,
	I0731 11:13:58.756184  100669 command_runner.go:130] >       "pinned": false
	I0731 11:13:58.756188  100669 command_runner.go:130] >     },
	I0731 11:13:58.756191  100669 command_runner.go:130] >     {
	I0731 11:13:58.756197  100669 command_runner.go:130] >       "id": "08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a",
	I0731 11:13:58.756203  100669 command_runner.go:130] >       "repoTags": [
	I0731 11:13:58.756208  100669 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.27.3"
	I0731 11:13:58.756215  100669 command_runner.go:130] >       ],
	I0731 11:13:58.756220  100669 command_runner.go:130] >       "repoDigests": [
	I0731 11:13:58.756227  100669 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb",
	I0731 11:13:58.756236  100669 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0"
	I0731 11:13:58.756240  100669 command_runner.go:130] >       ],
	I0731 11:13:58.756244  100669 command_runner.go:130] >       "size": "122065872",
	I0731 11:13:58.756251  100669 command_runner.go:130] >       "uid": {
	I0731 11:13:58.756255  100669 command_runner.go:130] >         "value": "0"
	I0731 11:13:58.756261  100669 command_runner.go:130] >       },
	I0731 11:13:58.756265  100669 command_runner.go:130] >       "username": "",
	I0731 11:13:58.756269  100669 command_runner.go:130] >       "spec": null,
	I0731 11:13:58.756272  100669 command_runner.go:130] >       "pinned": false
	I0731 11:13:58.756276  100669 command_runner.go:130] >     },
	I0731 11:13:58.756279  100669 command_runner.go:130] >     {
	I0731 11:13:58.756286  100669 command_runner.go:130] >       "id": "7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f",
	I0731 11:13:58.756292  100669 command_runner.go:130] >       "repoTags": [
	I0731 11:13:58.756297  100669 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.27.3"
	I0731 11:13:58.756300  100669 command_runner.go:130] >       ],
	I0731 11:13:58.756304  100669 command_runner.go:130] >       "repoDigests": [
	I0731 11:13:58.756314  100669 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e",
	I0731 11:13:58.756321  100669 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:d3bdc20876edfaa4894cf8464dc98592385a43cbc033b37846dccc2460c7bc06"
	I0731 11:13:58.756327  100669 command_runner.go:130] >       ],
	I0731 11:13:58.756331  100669 command_runner.go:130] >       "size": "113919286",
	I0731 11:13:58.756334  100669 command_runner.go:130] >       "uid": {
	I0731 11:13:58.756340  100669 command_runner.go:130] >         "value": "0"
	I0731 11:13:58.756344  100669 command_runner.go:130] >       },
	I0731 11:13:58.756348  100669 command_runner.go:130] >       "username": "",
	I0731 11:13:58.756352  100669 command_runner.go:130] >       "spec": null,
	I0731 11:13:58.756356  100669 command_runner.go:130] >       "pinned": false
	I0731 11:13:58.756359  100669 command_runner.go:130] >     },
	I0731 11:13:58.756362  100669 command_runner.go:130] >     {
	I0731 11:13:58.756368  100669 command_runner.go:130] >       "id": "5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c",
	I0731 11:13:58.756375  100669 command_runner.go:130] >       "repoTags": [
	I0731 11:13:58.756379  100669 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.27.3"
	I0731 11:13:58.756383  100669 command_runner.go:130] >       ],
	I0731 11:13:58.756386  100669 command_runner.go:130] >       "repoDigests": [
	I0731 11:13:58.756396  100669 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f",
	I0731 11:13:58.756403  100669 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699"
	I0731 11:13:58.756406  100669 command_runner.go:130] >       ],
	I0731 11:13:58.756410  100669 command_runner.go:130] >       "size": "72713623",
	I0731 11:13:58.756414  100669 command_runner.go:130] >       "uid": null,
	I0731 11:13:58.756418  100669 command_runner.go:130] >       "username": "",
	I0731 11:13:58.756422  100669 command_runner.go:130] >       "spec": null,
	I0731 11:13:58.756426  100669 command_runner.go:130] >       "pinned": false
	I0731 11:13:58.756429  100669 command_runner.go:130] >     },
	I0731 11:13:58.756432  100669 command_runner.go:130] >     {
	I0731 11:13:58.756440  100669 command_runner.go:130] >       "id": "41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a",
	I0731 11:13:58.756447  100669 command_runner.go:130] >       "repoTags": [
	I0731 11:13:58.756451  100669 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.27.3"
	I0731 11:13:58.756455  100669 command_runner.go:130] >       ],
	I0731 11:13:58.756458  100669 command_runner.go:130] >       "repoDigests": [
	I0731 11:13:58.756473  100669 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082",
	I0731 11:13:58.756482  100669 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8"
	I0731 11:13:58.756486  100669 command_runner.go:130] >       ],
	I0731 11:13:58.756490  100669 command_runner.go:130] >       "size": "59811126",
	I0731 11:13:58.756494  100669 command_runner.go:130] >       "uid": {
	I0731 11:13:58.756498  100669 command_runner.go:130] >         "value": "0"
	I0731 11:13:58.756501  100669 command_runner.go:130] >       },
	I0731 11:13:58.756505  100669 command_runner.go:130] >       "username": "",
	I0731 11:13:58.756509  100669 command_runner.go:130] >       "spec": null,
	I0731 11:13:58.756513  100669 command_runner.go:130] >       "pinned": false
	I0731 11:13:58.756517  100669 command_runner.go:130] >     },
	I0731 11:13:58.756520  100669 command_runner.go:130] >     {
	I0731 11:13:58.756526  100669 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0731 11:13:58.756532  100669 command_runner.go:130] >       "repoTags": [
	I0731 11:13:58.756537  100669 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0731 11:13:58.756540  100669 command_runner.go:130] >       ],
	I0731 11:13:58.756544  100669 command_runner.go:130] >       "repoDigests": [
	I0731 11:13:58.756553  100669 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0731 11:13:58.756571  100669 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0731 11:13:58.756583  100669 command_runner.go:130] >       ],
	I0731 11:13:58.756586  100669 command_runner.go:130] >       "size": "750414",
	I0731 11:13:58.756591  100669 command_runner.go:130] >       "uid": {
	I0731 11:13:58.756594  100669 command_runner.go:130] >         "value": "65535"
	I0731 11:13:58.756598  100669 command_runner.go:130] >       },
	I0731 11:13:58.756602  100669 command_runner.go:130] >       "username": "",
	I0731 11:13:58.756606  100669 command_runner.go:130] >       "spec": null,
	I0731 11:13:58.756610  100669 command_runner.go:130] >       "pinned": false
	I0731 11:13:58.756616  100669 command_runner.go:130] >     }
	I0731 11:13:58.756619  100669 command_runner.go:130] >   ]
	I0731 11:13:58.756623  100669 command_runner.go:130] > }
	I0731 11:13:58.756721  100669 crio.go:496] all images are preloaded for cri-o runtime.
	I0731 11:13:58.756731  100669 cache_images.go:84] Images are preloaded, skipping loading
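	Note: the preload decision works off the JSON that crictl images --output json returns above (run twice here, once before and once after the extraction check). Decoding it and comparing repo tags against the expected image list is enough to decide whether the preload tarball can be skipped; a sketch against the structure shown in the output (struct and function names are assumptions, not minikube's API):

	package cruntime

	import "encoding/json"

	type crictlImage struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Pinned      bool     `json:"pinned"`
	}

	type crictlImageList struct {
		Images []crictlImage `json:"images"`
	}

	// allPreloaded reports whether every tag in want is already present in the
	// crictl listing, i.e. the preload tarball does not need to be extracted.
	func allPreloaded(raw []byte, want []string) (bool, error) {
		var list crictlImageList
		if err := json.Unmarshal(raw, &list); err != nil {
			return false, err
		}
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		for _, tag := range want {
			if !have[tag] {
				return false, nil // e.g. registry.k8s.io/kube-apiserver:v1.27.3 missing
			}
		}
		return true, nil
	}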
	I0731 11:13:58.756783  100669 ssh_runner.go:195] Run: crio config
	I0731 11:13:58.794124  100669 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0731 11:13:58.794158  100669 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0731 11:13:58.794170  100669 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0731 11:13:58.794176  100669 command_runner.go:130] > #
	I0731 11:13:58.794189  100669 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0731 11:13:58.794200  100669 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0731 11:13:58.794212  100669 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0731 11:13:58.794236  100669 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0731 11:13:58.794244  100669 command_runner.go:130] > # reload'.
	I0731 11:13:58.794258  100669 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0731 11:13:58.794274  100669 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0731 11:13:58.794289  100669 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0731 11:13:58.794299  100669 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0731 11:13:58.794306  100669 command_runner.go:130] > [crio]
	I0731 11:13:58.794324  100669 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0731 11:13:58.794334  100669 command_runner.go:130] > # containers images, in this directory.
	I0731 11:13:58.794348  100669 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0731 11:13:58.794366  100669 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0731 11:13:58.794379  100669 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0731 11:13:58.794389  100669 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0731 11:13:58.794403  100669 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0731 11:13:58.794411  100669 command_runner.go:130] > # storage_driver = "vfs"
	I0731 11:13:58.794425  100669 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0731 11:13:58.794436  100669 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0731 11:13:58.794450  100669 command_runner.go:130] > # storage_option = [
	I0731 11:13:58.794457  100669 command_runner.go:130] > # ]
	I0731 11:13:58.794467  100669 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0731 11:13:58.794480  100669 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0731 11:13:58.794490  100669 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0731 11:13:58.794501  100669 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0731 11:13:58.794512  100669 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0731 11:13:58.794524  100669 command_runner.go:130] > # always happen on a node reboot
	I0731 11:13:58.794533  100669 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0731 11:13:58.794546  100669 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0731 11:13:58.794559  100669 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0731 11:13:58.794626  100669 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0731 11:13:58.794650  100669 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0731 11:13:58.794665  100669 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0731 11:13:58.794683  100669 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0731 11:13:58.794694  100669 command_runner.go:130] > # internal_wipe = true
	I0731 11:13:58.794703  100669 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0731 11:13:58.794716  100669 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0731 11:13:58.794734  100669 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0731 11:13:58.794748  100669 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0731 11:13:58.794770  100669 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0731 11:13:58.794782  100669 command_runner.go:130] > [crio.api]
	I0731 11:13:58.794792  100669 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0731 11:13:58.794805  100669 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0731 11:13:58.794816  100669 command_runner.go:130] > # IP address on which the stream server will listen.
	I0731 11:13:58.794829  100669 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0731 11:13:58.794844  100669 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0731 11:13:58.794857  100669 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0731 11:13:58.794867  100669 command_runner.go:130] > # stream_port = "0"
	I0731 11:13:58.794877  100669 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0731 11:13:58.794889  100669 command_runner.go:130] > # stream_enable_tls = false
	I0731 11:13:58.794904  100669 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0731 11:13:58.794916  100669 command_runner.go:130] > # stream_idle_timeout = ""
	I0731 11:13:58.794928  100669 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0731 11:13:58.794943  100669 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0731 11:13:58.794951  100669 command_runner.go:130] > # minutes.
	I0731 11:13:58.794964  100669 command_runner.go:130] > # stream_tls_cert = ""
	I0731 11:13:58.794979  100669 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0731 11:13:58.794994  100669 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0731 11:13:58.795005  100669 command_runner.go:130] > # stream_tls_key = ""
	I0731 11:13:58.795019  100669 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0731 11:13:58.795034  100669 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0731 11:13:58.795048  100669 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0731 11:13:58.795056  100669 command_runner.go:130] > # stream_tls_ca = ""
	I0731 11:13:58.795072  100669 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0731 11:13:58.795084  100669 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0731 11:13:58.795102  100669 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0731 11:13:58.795113  100669 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0731 11:13:58.795186  100669 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0731 11:13:58.795204  100669 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0731 11:13:58.795212  100669 command_runner.go:130] > [crio.runtime]
	I0731 11:13:58.795223  100669 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0731 11:13:58.795239  100669 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0731 11:13:58.795247  100669 command_runner.go:130] > # "nofile=1024:2048"
	I0731 11:13:58.795262  100669 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0731 11:13:58.795274  100669 command_runner.go:130] > # default_ulimits = [
	I0731 11:13:58.795281  100669 command_runner.go:130] > # ]
	I0731 11:13:58.795294  100669 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0731 11:13:58.795309  100669 command_runner.go:130] > # no_pivot = false
	I0731 11:13:58.795323  100669 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0731 11:13:58.795334  100669 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0731 11:13:58.795345  100669 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0731 11:13:58.795353  100669 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0731 11:13:58.795367  100669 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0731 11:13:58.795380  100669 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0731 11:13:58.795386  100669 command_runner.go:130] > # conmon = ""
	I0731 11:13:58.795393  100669 command_runner.go:130] > # Cgroup setting for conmon
	I0731 11:13:58.795402  100669 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0731 11:13:58.795408  100669 command_runner.go:130] > conmon_cgroup = "pod"
	I0731 11:13:58.795417  100669 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0731 11:13:58.795424  100669 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0731 11:13:58.795433  100669 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0731 11:13:58.795440  100669 command_runner.go:130] > # conmon_env = [
	I0731 11:13:58.795445  100669 command_runner.go:130] > # ]
	I0731 11:13:58.795453  100669 command_runner.go:130] > # Additional environment variables to set for all the
	I0731 11:13:58.795460  100669 command_runner.go:130] > # containers. These are overridden if set in the
	I0731 11:13:58.795468  100669 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0731 11:13:58.795474  100669 command_runner.go:130] > # default_env = [
	I0731 11:13:58.795479  100669 command_runner.go:130] > # ]
	I0731 11:13:58.795487  100669 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0731 11:13:58.795493  100669 command_runner.go:130] > # selinux = false
	I0731 11:13:58.795501  100669 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0731 11:13:58.795510  100669 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0731 11:13:58.795518  100669 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0731 11:13:58.795525  100669 command_runner.go:130] > # seccomp_profile = ""
	I0731 11:13:58.795542  100669 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0731 11:13:58.795553  100669 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0731 11:13:58.795563  100669 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0731 11:13:58.795572  100669 command_runner.go:130] > # which might increase security.
	I0731 11:13:58.795580  100669 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0731 11:13:58.795590  100669 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0731 11:13:58.795598  100669 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0731 11:13:58.795604  100669 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0731 11:13:58.795610  100669 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0731 11:13:58.795615  100669 command_runner.go:130] > # This option supports live configuration reload.
	I0731 11:13:58.795619  100669 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0731 11:13:58.795628  100669 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0731 11:13:58.795632  100669 command_runner.go:130] > # the cgroup blockio controller.
	I0731 11:13:58.795636  100669 command_runner.go:130] > # blockio_config_file = ""
	I0731 11:13:58.795642  100669 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0731 11:13:58.795646  100669 command_runner.go:130] > # irqbalance daemon.
	I0731 11:13:58.795654  100669 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0731 11:13:58.795660  100669 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0731 11:13:58.795665  100669 command_runner.go:130] > # This option supports live configuration reload.
	I0731 11:13:58.795669  100669 command_runner.go:130] > # rdt_config_file = ""
	I0731 11:13:58.795674  100669 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0731 11:13:58.795678  100669 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0731 11:13:58.795683  100669 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0731 11:13:58.795689  100669 command_runner.go:130] > # separate_pull_cgroup = ""
	I0731 11:13:58.795695  100669 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0731 11:13:58.795701  100669 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0731 11:13:58.795704  100669 command_runner.go:130] > # will be added.
	I0731 11:13:58.795708  100669 command_runner.go:130] > # default_capabilities = [
	I0731 11:13:58.795712  100669 command_runner.go:130] > # 	"CHOWN",
	I0731 11:13:58.795715  100669 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0731 11:13:58.795719  100669 command_runner.go:130] > # 	"FSETID",
	I0731 11:13:58.795722  100669 command_runner.go:130] > # 	"FOWNER",
	I0731 11:13:58.795725  100669 command_runner.go:130] > # 	"SETGID",
	I0731 11:13:58.795728  100669 command_runner.go:130] > # 	"SETUID",
	I0731 11:13:58.795732  100669 command_runner.go:130] > # 	"SETPCAP",
	I0731 11:13:58.795736  100669 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0731 11:13:58.795739  100669 command_runner.go:130] > # 	"KILL",
	I0731 11:13:58.795742  100669 command_runner.go:130] > # ]
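As a sketch of tightening the default set documented above, the drop-in below (file name illustrative) removes KILL and SETPCAP; individual pods can still request them back through a container securityContext:

    cat <<'EOF' | sudo tee /etc/crio/crio.conf.d/20-caps.conf
    # Trimmed default capability set; anything else must be requested per container.
    [crio.runtime]
    default_capabilities = [
        "CHOWN",
        "DAC_OVERRIDE",
        "FSETID",
        "FOWNER",
        "SETGID",
        "SETUID",
        "NET_BIND_SERVICE",
    ]
    EOF
    sudo systemctl restart crio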
	I0731 11:13:58.795749  100669 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0731 11:13:58.795755  100669 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0731 11:13:58.795760  100669 command_runner.go:130] > # add_inheritable_capabilities = true
	I0731 11:13:58.795766  100669 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0731 11:13:58.795772  100669 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0731 11:13:58.795775  100669 command_runner.go:130] > # default_sysctls = [
	I0731 11:13:58.795779  100669 command_runner.go:130] > # ]
	I0731 11:13:58.795783  100669 command_runner.go:130] > # List of devices on the host that a
	I0731 11:13:58.795789  100669 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0731 11:13:58.795793  100669 command_runner.go:130] > # allowed_devices = [
	I0731 11:13:58.795801  100669 command_runner.go:130] > # 	"/dev/fuse",
	I0731 11:13:58.795805  100669 command_runner.go:130] > # ]
	I0731 11:13:58.795810  100669 command_runner.go:130] > # List of additional devices, specified as
	I0731 11:13:58.795860  100669 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0731 11:13:58.795866  100669 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0731 11:13:58.795872  100669 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0731 11:13:58.795899  100669 command_runner.go:130] > # additional_devices = [
	I0731 11:13:58.795906  100669 command_runner.go:130] > # ]
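To make the two device lists concrete, a hedged sketch that keeps /dev/fuse annotation-requestable and bind-maps one host device into every container (the watchdog device is purely illustrative):

    cat <<'EOF' | sudo tee /etc/crio/crio.conf.d/30-devices.conf
    [crio.runtime]
    # Requestable via the io.kubernetes.cri-o.Devices annotation:
    allowed_devices = ["/dev/fuse"]
    # <host>:<container>:<permissions>, added to every container:
    additional_devices = ["/dev/watchdog:/dev/watchdog:r"]
    EOF
    sudo systemctl restart crio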
	I0731 11:13:58.795914  100669 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0731 11:13:58.795920  100669 command_runner.go:130] > # cdi_spec_dirs = [
	I0731 11:13:58.795927  100669 command_runner.go:130] > # 	"/etc/cdi",
	I0731 11:13:58.795936  100669 command_runner.go:130] > # 	"/var/run/cdi",
	I0731 11:13:58.795942  100669 command_runner.go:130] > # ]
	I0731 11:13:58.795953  100669 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0731 11:13:58.795961  100669 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0731 11:13:58.795966  100669 command_runner.go:130] > # Defaults to false.
	I0731 11:13:58.795970  100669 command_runner.go:130] > # device_ownership_from_security_context = false
	I0731 11:13:58.795976  100669 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0731 11:13:58.795982  100669 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0731 11:13:58.795986  100669 command_runner.go:130] > # hooks_dir = [
	I0731 11:13:58.795990  100669 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0731 11:13:58.795993  100669 command_runner.go:130] > # ]
	I0731 11:13:58.795999  100669 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0731 11:13:58.796005  100669 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0731 11:13:58.796009  100669 command_runner.go:130] > # its default mounts from the following two files:
	I0731 11:13:58.796012  100669 command_runner.go:130] > #
	I0731 11:13:58.796018  100669 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0731 11:13:58.796024  100669 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0731 11:13:58.796029  100669 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0731 11:13:58.796032  100669 command_runner.go:130] > #
	I0731 11:13:58.796038  100669 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0731 11:13:58.796043  100669 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0731 11:13:58.796049  100669 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0731 11:13:58.796054  100669 command_runner.go:130] > #      only add mounts it finds in this file.
	I0731 11:13:58.796057  100669 command_runner.go:130] > #
	I0731 11:13:58.796061  100669 command_runner.go:130] > # default_mounts_file = ""
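A sketch of the /SRC:/DST format described above, with an illustrative path; note that because /etc/containers/mounts.conf is the override file, CRI-O will then add only the mounts listed in it:

    # /etc/containers/mounts.conf -- one /SRC:/DST mount per line
    echo '/etc/pki/ca-trust:/etc/pki/ca-trust' | sudo tee /etc/containers/mounts.conf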
	I0731 11:13:58.796066  100669 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0731 11:13:58.796072  100669 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0731 11:13:58.796075  100669 command_runner.go:130] > # pids_limit = 0
	I0731 11:13:58.796081  100669 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0731 11:13:58.796086  100669 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0731 11:13:58.796092  100669 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0731 11:13:58.796099  100669 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0731 11:13:58.796104  100669 command_runner.go:130] > # log_size_max = -1
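Both knobs above are deprecated in favor of kubelet settings; a hedged sketch of the replacements using KubeletConfiguration (kubelet.config.k8s.io/v1beta1) field names, assuming neither key is already present in the file:

    cat <<'EOF' | sudo tee -a /var/lib/kubelet/config.yaml
    # kubelet-level replacements for pids_limit and log_size_max
    podPidsLimit: 4096
    containerLogMaxSize: "10Mi"
    EOF
    sudo systemctl restart kubelet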
	I0731 11:13:58.796111  100669 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0731 11:13:58.796118  100669 command_runner.go:130] > # log_to_journald = false
	I0731 11:13:58.796124  100669 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0731 11:13:58.796139  100669 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0731 11:13:58.796144  100669 command_runner.go:130] > # Path to directory for container attach sockets.
	I0731 11:13:58.796150  100669 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0731 11:13:58.796156  100669 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0731 11:13:58.796160  100669 command_runner.go:130] > # bind_mount_prefix = ""
	I0731 11:13:58.796165  100669 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0731 11:13:58.796169  100669 command_runner.go:130] > # read_only = false
	I0731 11:13:58.796175  100669 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0731 11:13:58.796180  100669 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0731 11:13:58.796184  100669 command_runner.go:130] > # live configuration reload.
	I0731 11:13:58.796188  100669 command_runner.go:130] > # log_level = "info"
	I0731 11:13:58.796194  100669 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0731 11:13:58.796198  100669 command_runner.go:130] > # This option supports live configuration reload.
	I0731 11:13:58.796202  100669 command_runner.go:130] > # log_filter = ""
	I0731 11:13:58.796208  100669 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0731 11:13:58.796213  100669 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0731 11:13:58.796217  100669 command_runner.go:130] > # separated by comma.
	I0731 11:13:58.796221  100669 command_runner.go:130] > # uid_mappings = ""
	I0731 11:13:58.796227  100669 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0731 11:13:58.796232  100669 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0731 11:13:58.796236  100669 command_runner.go:130] > # separated by comma.
	I0731 11:13:58.796240  100669 command_runner.go:130] > # gid_mappings = ""
	I0731 11:13:58.796246  100669 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0731 11:13:58.796251  100669 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0731 11:13:58.796257  100669 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0731 11:13:58.796260  100669 command_runner.go:130] > # minimum_mappable_uid = -1
	I0731 11:13:58.796266  100669 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0731 11:13:58.796272  100669 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0731 11:13:58.796279  100669 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0731 11:13:58.796283  100669 command_runner.go:130] > # minimum_mappable_gid = -1
	I0731 11:13:58.796289  100669 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0731 11:13:58.796294  100669 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0731 11:13:58.796300  100669 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0731 11:13:58.796303  100669 command_runner.go:130] > # ctr_stop_timeout = 30
	I0731 11:13:58.796309  100669 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0731 11:13:58.796341  100669 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0731 11:13:58.796351  100669 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0731 11:13:58.796356  100669 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0731 11:13:58.796360  100669 command_runner.go:130] > # drop_infra_ctr = true
	I0731 11:13:58.796366  100669 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0731 11:13:58.796378  100669 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0731 11:13:58.796390  100669 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0731 11:13:58.796397  100669 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0731 11:13:58.796406  100669 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0731 11:13:58.796415  100669 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0731 11:13:58.796422  100669 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0731 11:13:58.796439  100669 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0731 11:13:58.796444  100669 command_runner.go:130] > # pinns_path = ""
	I0731 11:13:58.796449  100669 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0731 11:13:58.796455  100669 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0731 11:13:58.796461  100669 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0731 11:13:58.796465  100669 command_runner.go:130] > # default_runtime = "runc"
	I0731 11:13:58.796470  100669 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0731 11:13:58.796476  100669 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0731 11:13:58.796485  100669 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0731 11:13:58.796490  100669 command_runner.go:130] > # creation as a file is not desired either.
	I0731 11:13:58.796497  100669 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0731 11:13:58.796502  100669 command_runner.go:130] > # the hostname is being managed dynamically.
	I0731 11:13:58.796506  100669 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0731 11:13:58.796509  100669 command_runner.go:130] > # ]
	I0731 11:13:58.796516  100669 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0731 11:13:58.796521  100669 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0731 11:13:58.796527  100669 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0731 11:13:58.796533  100669 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0731 11:13:58.796547  100669 command_runner.go:130] > #
	I0731 11:13:58.796552  100669 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0731 11:13:58.796556  100669 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0731 11:13:58.796560  100669 command_runner.go:130] > #  runtime_type = "oci"
	I0731 11:13:58.796564  100669 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0731 11:13:58.796569  100669 command_runner.go:130] > #  privileged_without_host_devices = false
	I0731 11:13:58.796573  100669 command_runner.go:130] > #  allowed_annotations = []
	I0731 11:13:58.796577  100669 command_runner.go:130] > # Where:
	I0731 11:13:58.796583  100669 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0731 11:13:58.796593  100669 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0731 11:13:58.796599  100669 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0731 11:13:58.796606  100669 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0731 11:13:58.796611  100669 command_runner.go:130] > #   in $PATH.
	I0731 11:13:58.796617  100669 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0731 11:13:58.796621  100669 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0731 11:13:58.796629  100669 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0731 11:13:58.796632  100669 command_runner.go:130] > #   state.
	I0731 11:13:58.796638  100669 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0731 11:13:58.796644  100669 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0731 11:13:58.796649  100669 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0731 11:13:58.796655  100669 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0731 11:13:58.796661  100669 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0731 11:13:58.796667  100669 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0731 11:13:58.796672  100669 command_runner.go:130] > #   The currently recognized values are:
	I0731 11:13:58.796681  100669 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0731 11:13:58.796691  100669 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0731 11:13:58.796697  100669 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0731 11:13:58.796703  100669 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0731 11:13:58.796710  100669 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0731 11:13:58.796716  100669 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0731 11:13:58.796721  100669 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0731 11:13:58.796727  100669 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0731 11:13:58.796732  100669 command_runner.go:130] > #   should be moved to the container's cgroup
	I0731 11:13:58.796736  100669 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0731 11:13:58.796741  100669 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0731 11:13:58.796745  100669 command_runner.go:130] > runtime_type = "oci"
	I0731 11:13:58.796749  100669 command_runner.go:130] > runtime_root = "/run/runc"
	I0731 11:13:58.796753  100669 command_runner.go:130] > runtime_config_path = ""
	I0731 11:13:58.796756  100669 command_runner.go:130] > monitor_path = ""
	I0731 11:13:58.796760  100669 command_runner.go:130] > monitor_cgroup = ""
	I0731 11:13:58.796764  100669 command_runner.go:130] > monitor_exec_cgroup = ""
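To illustrate the table format documented above, a sketch that registers a second handler for crun (assuming the binary lives at /usr/bin/crun) plus the RuntimeClass that lets pods select it:

    cat <<'EOF' | sudo tee /etc/crio/crio.conf.d/40-crun.conf
    [crio.runtime.runtimes.crun]
    runtime_path = "/usr/bin/crun"
    runtime_type = "oci"
    runtime_root = "/run/crun"
    EOF
    sudo systemctl restart crio
    # Make the handler selectable from Kubernetes via runtimeClassName: crun
    kubectl apply -f - <<'EOF'
    apiVersion: node.k8s.io/v1
    kind: RuntimeClass
    metadata:
      name: crun
    handler: crun
    EOF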
	I0731 11:13:58.796857  100669 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0731 11:13:58.796878  100669 command_runner.go:130] > # running containers
	I0731 11:13:58.796885  100669 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0731 11:13:58.796898  100669 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0731 11:13:58.796917  100669 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0731 11:13:58.796930  100669 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0731 11:13:58.796945  100669 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0731 11:13:58.796953  100669 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0731 11:13:58.796961  100669 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0731 11:13:58.796972  100669 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0731 11:13:58.796978  100669 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0731 11:13:58.796984  100669 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0731 11:13:58.796990  100669 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0731 11:13:58.796997  100669 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0731 11:13:58.797004  100669 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0731 11:13:58.797014  100669 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0731 11:13:58.797025  100669 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0731 11:13:58.797031  100669 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0731 11:13:58.797041  100669 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0731 11:13:58.797056  100669 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0731 11:13:58.797070  100669 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0731 11:13:58.797083  100669 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0731 11:13:58.797094  100669 command_runner.go:130] > # Example:
	I0731 11:13:58.797106  100669 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0731 11:13:58.797119  100669 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0731 11:13:58.797128  100669 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0731 11:13:58.797135  100669 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0731 11:13:58.797139  100669 command_runner.go:130] > # cpuset = 0
	I0731 11:13:58.797146  100669 command_runner.go:130] > # cpushares = "0-1"
	I0731 11:13:58.797149  100669 command_runner.go:130] > # Where:
	I0731 11:13:58.797155  100669 command_runner.go:130] > # The workload name is workload-type.
	I0731 11:13:58.797164  100669 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0731 11:13:58.797170  100669 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0731 11:13:58.797178  100669 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0731 11:13:58.797188  100669 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0731 11:13:58.797196  100669 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0731 11:13:58.797202  100669 command_runner.go:130] > # 
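Following the annotation grammar spelled out above, a hypothetical pod opting into the workload-type workload and overriding cpushares for its container named app might look like the following (values illustrative, and the feature is marked experimental):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: workload-demo
      annotations:
        io.crio/workload: ""                              # activation (key only, value ignored)
        io.crio.workload-type/app: '{"cpushares": "512"}'
    spec:
      containers:
      - name: app
        image: registry.k8s.io/pause:3.9
    EOF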
	I0731 11:13:58.797209  100669 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0731 11:13:58.797215  100669 command_runner.go:130] > #
	I0731 11:13:58.797226  100669 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0731 11:13:58.797235  100669 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0731 11:13:58.797244  100669 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0731 11:13:58.797253  100669 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0731 11:13:58.797261  100669 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0731 11:13:58.797267  100669 command_runner.go:130] > [crio.image]
	I0731 11:13:58.797274  100669 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0731 11:13:58.797281  100669 command_runner.go:130] > # default_transport = "docker://"
	I0731 11:13:58.797287  100669 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0731 11:13:58.797296  100669 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0731 11:13:58.797303  100669 command_runner.go:130] > # global_auth_file = ""
	I0731 11:13:58.797308  100669 command_runner.go:130] > # The image used to instantiate infra containers.
	I0731 11:13:58.797315  100669 command_runner.go:130] > # This option supports live configuration reload.
	I0731 11:13:58.797323  100669 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0731 11:13:58.797329  100669 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0731 11:13:58.797337  100669 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0731 11:13:58.797345  100669 command_runner.go:130] > # This option supports live configuration reload.
	I0731 11:13:58.797353  100669 command_runner.go:130] > # pause_image_auth_file = ""
	I0731 11:13:58.797361  100669 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0731 11:13:58.797370  100669 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0731 11:13:58.797379  100669 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0731 11:13:58.797388  100669 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0731 11:13:58.797395  100669 command_runner.go:130] > # pause_command = "/pause"
	I0731 11:13:58.797401  100669 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0731 11:13:58.797409  100669 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0731 11:13:58.797416  100669 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0731 11:13:58.797424  100669 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0731 11:13:58.797432  100669 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0731 11:13:58.797439  100669 command_runner.go:130] > # signature_policy = ""
	I0731 11:13:58.797452  100669 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0731 11:13:58.797461  100669 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0731 11:13:58.797468  100669 command_runner.go:130] > # changing them here.
	I0731 11:13:58.797473  100669 command_runner.go:130] > # insecure_registries = [
	I0731 11:13:58.797479  100669 command_runner.go:130] > # ]
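As the comment above recommends, registry trust is better expressed in containers-registries.conf(5); a sketch marking one hypothetical private registry insecure there rather than via insecure_registries:

    cat <<'EOF' | sudo tee -a /etc/containers/registries.conf
    [[registry]]
    location = "registry.internal.example:5000"
    insecure = true
    EOF
    sudo systemctl restart crio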
	I0731 11:13:58.797485  100669 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0731 11:13:58.797493  100669 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0731 11:13:58.797506  100669 command_runner.go:130] > # image_volumes = "mkdir"
	I0731 11:13:58.797515  100669 command_runner.go:130] > # Temporary directory to use for storing big files
	I0731 11:13:58.797522  100669 command_runner.go:130] > # big_files_temporary_dir = ""
	I0731 11:13:58.797529  100669 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0731 11:13:58.797540  100669 command_runner.go:130] > # CNI plugins.
	I0731 11:13:58.797547  100669 command_runner.go:130] > [crio.network]
	I0731 11:13:58.797553  100669 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0731 11:13:58.797564  100669 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0731 11:13:58.797573  100669 command_runner.go:130] > # cni_default_network = ""
	I0731 11:13:58.797582  100669 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0731 11:13:58.797586  100669 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0731 11:13:58.797594  100669 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0731 11:13:58.797601  100669 command_runner.go:130] > # plugin_dirs = [
	I0731 11:13:58.797605  100669 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0731 11:13:58.797611  100669 command_runner.go:130] > # ]
	I0731 11:13:58.797617  100669 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0731 11:13:58.797623  100669 command_runner.go:130] > [crio.metrics]
	I0731 11:13:58.797628  100669 command_runner.go:130] > # Globally enable or disable metrics support.
	I0731 11:13:58.797638  100669 command_runner.go:130] > # enable_metrics = false
	I0731 11:13:58.797646  100669 command_runner.go:130] > # Specify enabled metrics collectors.
	I0731 11:13:58.797653  100669 command_runner.go:130] > # Per default all metrics are enabled.
	I0731 11:13:58.797662  100669 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0731 11:13:58.797670  100669 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0731 11:13:58.797677  100669 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0731 11:13:58.797683  100669 command_runner.go:130] > # metrics_collectors = [
	I0731 11:13:58.797688  100669 command_runner.go:130] > # 	"operations",
	I0731 11:13:58.797695  100669 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0731 11:13:58.797700  100669 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0731 11:13:58.797706  100669 command_runner.go:130] > # 	"operations_errors",
	I0731 11:13:58.797711  100669 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0731 11:13:58.797718  100669 command_runner.go:130] > # 	"image_pulls_by_name",
	I0731 11:13:58.797723  100669 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0731 11:13:58.797729  100669 command_runner.go:130] > # 	"image_pulls_failures",
	I0731 11:13:58.797734  100669 command_runner.go:130] > # 	"image_pulls_successes",
	I0731 11:13:58.797740  100669 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0731 11:13:58.797745  100669 command_runner.go:130] > # 	"image_layer_reuse",
	I0731 11:13:58.797754  100669 command_runner.go:130] > # 	"containers_oom_total",
	I0731 11:13:58.797759  100669 command_runner.go:130] > # 	"containers_oom",
	I0731 11:13:58.797766  100669 command_runner.go:130] > # 	"processes_defunct",
	I0731 11:13:58.797770  100669 command_runner.go:130] > # 	"operations_total",
	I0731 11:13:58.797777  100669 command_runner.go:130] > # 	"operations_latency_seconds",
	I0731 11:13:58.797782  100669 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0731 11:13:58.797789  100669 command_runner.go:130] > # 	"operations_errors_total",
	I0731 11:13:58.797794  100669 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0731 11:13:58.797801  100669 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0731 11:13:58.797809  100669 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0731 11:13:58.797816  100669 command_runner.go:130] > # 	"image_pulls_success_total",
	I0731 11:13:58.797821  100669 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0731 11:13:58.797829  100669 command_runner.go:130] > # 	"containers_oom_count_total",
	I0731 11:13:58.797835  100669 command_runner.go:130] > # ]
	I0731 11:13:58.797841  100669 command_runner.go:130] > # The port on which the metrics server will listen.
	I0731 11:13:58.797847  100669 command_runner.go:130] > # metrics_port = 9090
	I0731 11:13:58.797852  100669 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0731 11:13:58.797859  100669 command_runner.go:130] > # metrics_socket = ""
	I0731 11:13:58.797866  100669 command_runner.go:130] > # The certificate for the secure metrics server.
	I0731 11:13:58.797876  100669 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0731 11:13:58.797884  100669 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0731 11:13:58.797892  100669 command_runner.go:130] > # certificate on any modification event.
	I0731 11:13:58.797899  100669 command_runner.go:130] > # metrics_cert = ""
	I0731 11:13:58.797904  100669 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0731 11:13:58.797912  100669 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0731 11:13:58.797917  100669 command_runner.go:130] > # metrics_key = ""
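Since everything in [crio.metrics] above is disabled by default, a sketch that switches the exporter on and spot-checks it from the node, assuming the default port shown above:

    cat <<'EOF' | sudo tee /etc/crio/crio.conf.d/50-metrics.conf
    [crio.metrics]
    enable_metrics = true
    metrics_port = 9090
    EOF
    sudo systemctl restart crio
    curl -s http://127.0.0.1:9090/metrics | grep crio_operations | head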
	I0731 11:13:58.797923  100669 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0731 11:13:58.797929  100669 command_runner.go:130] > [crio.tracing]
	I0731 11:13:58.797934  100669 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0731 11:13:58.797941  100669 command_runner.go:130] > # enable_tracing = false
	I0731 11:13:58.797947  100669 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0731 11:13:58.797954  100669 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0731 11:13:58.797959  100669 command_runner.go:130] > # Number of samples to collect per million spans.
	I0731 11:13:58.797967  100669 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0731 11:13:58.797975  100669 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0731 11:13:58.797982  100669 command_runner.go:130] > [crio.stats]
	I0731 11:13:58.797991  100669 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0731 11:13:58.798000  100669 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0731 11:13:58.798004  100669 command_runner.go:130] > # stats_collection_period = 0
	I0731 11:13:58.798030  100669 command_runner.go:130] ! time="2023-07-31 11:13:58.791770313Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0731 11:13:58.798048  100669 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0731 11:13:58.798133  100669 cni.go:84] Creating CNI manager for ""
	I0731 11:13:58.798148  100669 cni.go:136] 1 nodes found, recommending kindnet
	I0731 11:13:58.798157  100669 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0731 11:13:58.798175  100669 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-249026 NodeName:multinode-249026 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 11:13:58.798311  100669 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-249026"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 11:13:58.798371  100669 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-249026 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:multinode-249026 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
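The drop-in above lands at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 426-byte scp just below). After hand-editing a drop-in like this, the standard reload sequence applies; a sketch:

    sudo systemctl daemon-reload
    sudo systemctl restart kubelet
    systemctl status kubelet --no-pager | head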
	I0731 11:13:58.798419  100669 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0731 11:13:58.805644  100669 command_runner.go:130] > kubeadm
	I0731 11:13:58.805661  100669 command_runner.go:130] > kubectl
	I0731 11:13:58.805667  100669 command_runner.go:130] > kubelet
	I0731 11:13:58.806281  100669 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 11:13:58.806345  100669 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 11:13:58.814022  100669 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I0731 11:13:58.828834  100669 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 11:13:58.843940  100669 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
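Once the rendered config lands as /var/tmp/minikube/kubeadm.yaml.new (previous line), it can be sanity-checked by hand with the kubeadm binary already found above; a sketch, assuming this build ships the validate subcommand (added around v1.26):

    sudo /var/lib/minikube/binaries/v1.27.3/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new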
	I0731 11:13:58.858790  100669 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0731 11:13:58.861777  100669 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
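That one-liner is an idempotent hosts-file update: grep -v strips any stale control-plane.minikube.internal entry, the echo appends the current IP, and only the final copy runs under sudo. The same pattern unrolled for readability (IP and hostname as in this run; printf used so the tab is explicit):

    {
      grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts   # drop any old entry
      printf '192.168.58.2\tcontrol-plane.minikube.internal\n'   # append the fresh one
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts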
	I0731 11:13:58.870768  100669 certs.go:56] Setting up /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/multinode-249026 for IP: 192.168.58.2
	I0731 11:13:58.870803  100669 certs.go:190] acquiring lock for shared ca certs: {Name:mkc3a3f248dbae88fa439f539f826d6e08b37eb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:13:58.870953  100669 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16968-8855/.minikube/ca.key
	I0731 11:13:58.870992  100669 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16968-8855/.minikube/proxy-client-ca.key
	I0731 11:13:58.871034  100669 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/multinode-249026/client.key
	I0731 11:13:58.871050  100669 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/multinode-249026/client.crt with IP's: []
	I0731 11:13:58.963294  100669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/multinode-249026/client.crt ...
	I0731 11:13:58.963323  100669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/multinode-249026/client.crt: {Name:mk7ac6d11b8212a8cce35d701943f2e2278ab2ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:13:58.963482  100669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/multinode-249026/client.key ...
	I0731 11:13:58.963493  100669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/multinode-249026/client.key: {Name:mk563372192c66700283e4ff65c877e8f7876a48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:13:58.963560  100669 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/multinode-249026/apiserver.key.cee25041
	I0731 11:13:58.963574  100669 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/multinode-249026/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0731 11:13:59.047828  100669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/multinode-249026/apiserver.crt.cee25041 ...
	I0731 11:13:59.047858  100669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/multinode-249026/apiserver.crt.cee25041: {Name:mk4afefe852eed10d47ea1f8c47f86efe5596566 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:13:59.048052  100669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/multinode-249026/apiserver.key.cee25041 ...
	I0731 11:13:59.048074  100669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/multinode-249026/apiserver.key.cee25041: {Name:mkccbfd5ae2be82d8968b9128449406cec9a4c12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:13:59.048168  100669 certs.go:337] copying /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/multinode-249026/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/multinode-249026/apiserver.crt
	I0731 11:13:59.048257  100669 certs.go:341] copying /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/multinode-249026/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/multinode-249026/apiserver.key
	I0731 11:13:59.048341  100669 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/multinode-249026/proxy-client.key
	I0731 11:13:59.048359  100669 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/multinode-249026/proxy-client.crt with IP's: []
	I0731 11:13:59.120396  100669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/multinode-249026/proxy-client.crt ...
	I0731 11:13:59.120426  100669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/multinode-249026/proxy-client.crt: {Name:mkde5f4fb7bb73b173e2047861b27372241855de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:13:59.120625  100669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/multinode-249026/proxy-client.key ...
	I0731 11:13:59.120644  100669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/multinode-249026/proxy-client.key: {Name:mkf76a0e10ec799e99fc417e2e4b037a9e9dbb6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
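crypto.go mints these leaf certificates in-process; an equivalent openssl sketch for the client-certificate step, with illustrative file names and subject (the real code signs against the shared minikubeCA located earlier):

    # Key + CSR for a client cert, then sign it with the cluster CA.
    openssl genrsa -out client.key 2048
    openssl req -new -key client.key \
      -subj '/O=system:masters/CN=minikube-user' -out client.csr
    openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
      -CAcreateserial -days 365 -out client.crt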
	I0731 11:13:59.120741  100669 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/multinode-249026/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 11:13:59.120767  100669 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/multinode-249026/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 11:13:59.120787  100669 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/multinode-249026/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 11:13:59.120804  100669 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/multinode-249026/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 11:13:59.120821  100669 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 11:13:59.120834  100669 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 11:13:59.120848  100669 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 11:13:59.120867  100669 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 11:13:59.120932  100669 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/home/jenkins/minikube-integration/16968-8855/.minikube/certs/15646.pem (1338 bytes)
	W0731 11:13:59.120980  100669 certs.go:433] ignoring /home/jenkins/minikube-integration/16968-8855/.minikube/certs/home/jenkins/minikube-integration/16968-8855/.minikube/certs/15646_empty.pem, impossibly tiny 0 bytes
	I0731 11:13:59.120998  100669 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 11:13:59.121032  100669 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca.pem (1082 bytes)
	I0731 11:13:59.121067  100669 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/home/jenkins/minikube-integration/16968-8855/.minikube/certs/cert.pem (1123 bytes)
	I0731 11:13:59.121111  100669 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/home/jenkins/minikube-integration/16968-8855/.minikube/certs/key.pem (1675 bytes)
	I0731 11:13:59.121167  100669 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-8855/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16968-8855/.minikube/files/etc/ssl/certs/156462.pem (1708 bytes)
	I0731 11:13:59.121201  100669 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/files/etc/ssl/certs/156462.pem -> /usr/share/ca-certificates/156462.pem
	I0731 11:13:59.121223  100669 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 11:13:59.121240  100669 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/15646.pem -> /usr/share/ca-certificates/15646.pem
	I0731 11:13:59.121766  100669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/multinode-249026/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0731 11:13:59.143240  100669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/multinode-249026/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 11:13:59.163579  100669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/multinode-249026/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 11:13:59.183598  100669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/multinode-249026/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 11:13:59.203478  100669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 11:13:59.223361  100669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0731 11:13:59.243501  100669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 11:13:59.263269  100669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 11:13:59.283010  100669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/files/etc/ssl/certs/156462.pem --> /usr/share/ca-certificates/156462.pem (1708 bytes)
	I0731 11:13:59.303336  100669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 11:13:59.323171  100669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/certs/15646.pem --> /usr/share/ca-certificates/15646.pem (1338 bytes)
	I0731 11:13:59.342864  100669 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 11:13:59.357534  100669 ssh_runner.go:195] Run: openssl version
	I0731 11:13:59.362155  100669 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0731 11:13:59.362304  100669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/156462.pem && ln -fs /usr/share/ca-certificates/156462.pem /etc/ssl/certs/156462.pem"
	I0731 11:13:59.370073  100669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/156462.pem
	I0731 11:13:59.373020  100669 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 31 11:01 /usr/share/ca-certificates/156462.pem
	I0731 11:13:59.373043  100669 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 31 11:01 /usr/share/ca-certificates/156462.pem
	I0731 11:13:59.373084  100669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/156462.pem
	I0731 11:13:59.378924  100669 command_runner.go:130] > 3ec20f2e
	I0731 11:13:59.378983  100669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/156462.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 11:13:59.386795  100669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 11:13:59.394885  100669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 11:13:59.397962  100669 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 31 10:56 /usr/share/ca-certificates/minikubeCA.pem
	I0731 11:13:59.397999  100669 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 31 10:56 /usr/share/ca-certificates/minikubeCA.pem
	I0731 11:13:59.398037  100669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 11:13:59.403804  100669 command_runner.go:130] > b5213941
	I0731 11:13:59.404018  100669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 11:13:59.411946  100669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15646.pem && ln -fs /usr/share/ca-certificates/15646.pem /etc/ssl/certs/15646.pem"
	I0731 11:13:59.419847  100669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15646.pem
	I0731 11:13:59.422809  100669 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 31 11:01 /usr/share/ca-certificates/15646.pem
	I0731 11:13:59.422839  100669 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 31 11:01 /usr/share/ca-certificates/15646.pem
	I0731 11:13:59.422867  100669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15646.pem
	I0731 11:13:59.428764  100669 command_runner.go:130] > 51391683
	I0731 11:13:59.428826  100669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15646.pem /etc/ssl/certs/51391683.0"
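
Note on the three test/ln sequences above: each CA file is hashed with openssl x509 -hash and then exposed under /etc/ssl/certs as <hash>.0, which is how OpenSSL-style trust stores are refreshed. A minimal Go sketch of that flow follows; the paths and error handling are illustrative, not minikube's actual helper code:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCert mirrors the sequence in the log: compute the OpenSSL
    // subject hash of a CA file, then symlink it into /etc/ssl/certs
    // as <hash>.0 so the system trust store can find it.
    func linkCert(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", certPath, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	// Equivalent of ln -fs: drop any stale link, then recreate it.
    	_ = os.Remove(link)
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
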
	I0731 11:13:59.436801  100669 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0731 11:13:59.439524  100669 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0731 11:13:59.439593  100669 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0731 11:13:59.439644  100669 kubeadm.go:404] StartCluster: {Name:multinode-249026 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-249026 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 11:13:59.439723  100669 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 11:13:59.439770  100669 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 11:13:59.471597  100669 cri.go:89] found id: ""
	I0731 11:13:59.471667  100669 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 11:13:59.478733  100669 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0731 11:13:59.478751  100669 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0731 11:13:59.478757  100669 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0731 11:13:59.479492  100669 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 11:13:59.487235  100669 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0731 11:13:59.487279  100669 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 11:13:59.494071  100669 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0731 11:13:59.494090  100669 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0731 11:13:59.494097  100669 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0731 11:13:59.494106  100669 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 11:13:59.494727  100669 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 11:13:59.494762  100669 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0731 11:13:59.536523  100669 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0731 11:13:59.536557  100669 command_runner.go:130] > [init] Using Kubernetes version: v1.27.3
	I0731 11:13:59.536606  100669 kubeadm.go:322] [preflight] Running pre-flight checks
	I0731 11:13:59.536630  100669 command_runner.go:130] > [preflight] Running pre-flight checks
	I0731 11:13:59.570345  100669 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0731 11:13:59.570369  100669 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0731 11:13:59.570437  100669 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1038-gcp
	I0731 11:13:59.570450  100669 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1038-gcp
	I0731 11:13:59.570493  100669 kubeadm.go:322] OS: Linux
	I0731 11:13:59.570500  100669 command_runner.go:130] > OS: Linux
	I0731 11:13:59.570537  100669 kubeadm.go:322] CGROUPS_CPU: enabled
	I0731 11:13:59.570563  100669 command_runner.go:130] > CGROUPS_CPU: enabled
	I0731 11:13:59.570640  100669 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0731 11:13:59.570655  100669 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0731 11:13:59.570715  100669 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0731 11:13:59.570725  100669 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0731 11:13:59.570798  100669 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0731 11:13:59.570809  100669 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0731 11:13:59.570873  100669 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0731 11:13:59.570884  100669 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0731 11:13:59.570949  100669 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0731 11:13:59.570957  100669 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0731 11:13:59.570992  100669 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0731 11:13:59.570998  100669 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0731 11:13:59.571040  100669 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0731 11:13:59.571047  100669 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0731 11:13:59.571083  100669 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0731 11:13:59.571089  100669 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0731 11:13:59.630617  100669 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 11:13:59.630640  100669 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 11:13:59.630794  100669 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 11:13:59.630815  100669 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 11:13:59.630943  100669 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 11:13:59.630955  100669 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 11:13:59.817278  100669 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 11:13:59.821166  100669 out.go:204]   - Generating certificates and keys ...
	I0731 11:13:59.817354  100669 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 11:13:59.821320  100669 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0731 11:13:59.821339  100669 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0731 11:13:59.821435  100669 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0731 11:13:59.821446  100669 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0731 11:13:59.921205  100669 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 11:13:59.921232  100669 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 11:14:00.320386  100669 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0731 11:14:00.320407  100669 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0731 11:14:00.576761  100669 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0731 11:14:00.576801  100669 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0731 11:14:00.673107  100669 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0731 11:14:00.673136  100669 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0731 11:14:00.988926  100669 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0731 11:14:00.988952  100669 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0731 11:14:00.989071  100669 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-249026] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0731 11:14:00.989081  100669 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-249026] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0731 11:14:01.099438  100669 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0731 11:14:01.099473  100669 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0731 11:14:01.099582  100669 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-249026] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0731 11:14:01.099593  100669 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-249026] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0731 11:14:01.183547  100669 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 11:14:01.183576  100669 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 11:14:01.334488  100669 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 11:14:01.334534  100669 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 11:14:01.472331  100669 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0731 11:14:01.472352  100669 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0731 11:14:01.472413  100669 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 11:14:01.472424  100669 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 11:14:01.686196  100669 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 11:14:01.686225  100669 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 11:14:02.063840  100669 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 11:14:02.063863  100669 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 11:14:02.203153  100669 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 11:14:02.203182  100669 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 11:14:02.418766  100669 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 11:14:02.418793  100669 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 11:14:02.426635  100669 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 11:14:02.426682  100669 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 11:14:02.427548  100669 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 11:14:02.427567  100669 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 11:14:02.427635  100669 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0731 11:14:02.427649  100669 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0731 11:14:02.499838  100669 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 11:14:02.501998  100669 out.go:204]   - Booting up control plane ...
	I0731 11:14:02.499901  100669 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 11:14:02.502229  100669 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 11:14:02.502231  100669 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 11:14:02.504220  100669 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 11:14:02.504241  100669 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 11:14:02.505198  100669 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 11:14:02.505220  100669 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 11:14:02.505965  100669 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 11:14:02.505982  100669 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 11:14:02.508103  100669 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 11:14:02.508123  100669 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 11:14:08.010265  100669 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.502136 seconds
	I0731 11:14:08.010292  100669 command_runner.go:130] > [apiclient] All control plane components are healthy after 5.502136 seconds
	I0731 11:14:08.010450  100669 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 11:14:08.010463  100669 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 11:14:08.022400  100669 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 11:14:08.022413  100669 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 11:14:08.541069  100669 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 11:14:08.541090  100669 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0731 11:14:08.541283  100669 kubeadm.go:322] [mark-control-plane] Marking the node multinode-249026 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 11:14:08.541292  100669 command_runner.go:130] > [mark-control-plane] Marking the node multinode-249026 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 11:14:09.051276  100669 kubeadm.go:322] [bootstrap-token] Using token: nu28vy.6yzt38vsl7lzdebj
	I0731 11:14:09.052982  100669 out.go:204]   - Configuring RBAC rules ...
	I0731 11:14:09.051339  100669 command_runner.go:130] > [bootstrap-token] Using token: nu28vy.6yzt38vsl7lzdebj
	I0731 11:14:09.053111  100669 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 11:14:09.053123  100669 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 11:14:09.056895  100669 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 11:14:09.056914  100669 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 11:14:09.063384  100669 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 11:14:09.063415  100669 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 11:14:09.067166  100669 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 11:14:09.067188  100669 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 11:14:09.069867  100669 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 11:14:09.069882  100669 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 11:14:09.072716  100669 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 11:14:09.072734  100669 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 11:14:09.083059  100669 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 11:14:09.083079  100669 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 11:14:09.297696  100669 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0731 11:14:09.297724  100669 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0731 11:14:09.460808  100669 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0731 11:14:09.460836  100669 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0731 11:14:09.461721  100669 kubeadm.go:322] 
	I0731 11:14:09.461825  100669 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0731 11:14:09.461840  100669 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0731 11:14:09.461854  100669 kubeadm.go:322] 
	I0731 11:14:09.461971  100669 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0731 11:14:09.461993  100669 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0731 11:14:09.462019  100669 kubeadm.go:322] 
	I0731 11:14:09.462051  100669 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0731 11:14:09.462061  100669 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0731 11:14:09.462155  100669 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 11:14:09.462169  100669 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 11:14:09.462254  100669 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 11:14:09.462265  100669 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 11:14:09.462273  100669 kubeadm.go:322] 
	I0731 11:14:09.462351  100669 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0731 11:14:09.462362  100669 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0731 11:14:09.462367  100669 kubeadm.go:322] 
	I0731 11:14:09.462428  100669 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 11:14:09.462438  100669 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 11:14:09.462444  100669 kubeadm.go:322] 
	I0731 11:14:09.462513  100669 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0731 11:14:09.462520  100669 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0731 11:14:09.462628  100669 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 11:14:09.462638  100669 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 11:14:09.462731  100669 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 11:14:09.462752  100669 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 11:14:09.462758  100669 kubeadm.go:322] 
	I0731 11:14:09.462896  100669 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 11:14:09.462913  100669 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0731 11:14:09.463020  100669 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0731 11:14:09.463033  100669 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0731 11:14:09.463042  100669 kubeadm.go:322] 
	I0731 11:14:09.463172  100669 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token nu28vy.6yzt38vsl7lzdebj \
	I0731 11:14:09.463184  100669 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token nu28vy.6yzt38vsl7lzdebj \
	I0731 11:14:09.463334  100669 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:293b68dd99d5c75256004a8ddc8637ea08a1940f52c1b0e6476e24cc10aea3dd \
	I0731 11:14:09.463345  100669 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:293b68dd99d5c75256004a8ddc8637ea08a1940f52c1b0e6476e24cc10aea3dd \
	I0731 11:14:09.463374  100669 kubeadm.go:322] 	--control-plane 
	I0731 11:14:09.463383  100669 command_runner.go:130] > 	--control-plane 
	I0731 11:14:09.463389  100669 kubeadm.go:322] 
	I0731 11:14:09.463514  100669 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0731 11:14:09.463527  100669 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0731 11:14:09.463533  100669 kubeadm.go:322] 
	I0731 11:14:09.463668  100669 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token nu28vy.6yzt38vsl7lzdebj \
	I0731 11:14:09.463678  100669 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token nu28vy.6yzt38vsl7lzdebj \
	I0731 11:14:09.463825  100669 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:293b68dd99d5c75256004a8ddc8637ea08a1940f52c1b0e6476e24cc10aea3dd 
	I0731 11:14:09.463835  100669 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:293b68dd99d5c75256004a8ddc8637ea08a1940f52c1b0e6476e24cc10aea3dd 
	I0731 11:14:09.465279  100669 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1038-gcp\n", err: exit status 1
	I0731 11:14:09.465292  100669 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1038-gcp\n", err: exit status 1
	I0731 11:14:09.465409  100669 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 11:14:09.465424  100669 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
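
Note on the join commands above: the value after --discovery-token-ca-cert-hash is the SHA-256 of the cluster CA's public key in DER (SubjectPublicKeyInfo) form, which joining nodes use to pin the CA. A sketch of how that digest can be recomputed in Go, assuming the CA path shown earlier in the log:

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    // Prints the value kubeadm expects after --discovery-token-ca-cert-hash:
    // sha256 over the DER-encoded SubjectPublicKeyInfo of the cluster CA.
    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }
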
	I0731 11:14:09.465442  100669 cni.go:84] Creating CNI manager for ""
	I0731 11:14:09.465459  100669 cni.go:136] 1 nodes found, recommending kindnet
	I0731 11:14:09.468281  100669 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0731 11:14:09.469815  100669 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0731 11:14:09.473459  100669 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0731 11:14:09.473479  100669 command_runner.go:130] >   Size: 3955775   	Blocks: 7736       IO Block: 4096   regular file
	I0731 11:14:09.473494  100669 command_runner.go:130] > Device: 37h/55d	Inode: 556174      Links: 1
	I0731 11:14:09.473507  100669 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0731 11:14:09.473523  100669 command_runner.go:130] > Access: 2023-05-09 19:53:47.000000000 +0000
	I0731 11:14:09.473535  100669 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I0731 11:14:09.473547  100669 command_runner.go:130] > Change: 2023-07-31 10:55:52.654716471 +0000
	I0731 11:14:09.473559  100669 command_runner.go:130] >  Birth: 2023-07-31 10:55:52.630714145 +0000
	I0731 11:14:09.473620  100669 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0731 11:14:09.473631  100669 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0731 11:14:09.536563  100669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0731 11:14:10.204006  100669 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0731 11:14:10.208881  100669 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0731 11:14:10.215285  100669 command_runner.go:130] > serviceaccount/kindnet created
	I0731 11:14:10.224961  100669 command_runner.go:130] > daemonset.apps/kindnet created
	I0731 11:14:10.228707  100669 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 11:14:10.228822  100669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.31.1 minikube.k8s.io/commit=0b7b0f4114385a1c2b88c73e894c2289f44aee35 minikube.k8s.io/name=multinode-249026 minikube.k8s.io/updated_at=2023_07_31T11_14_10_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:14:10.228840  100669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:14:10.236228  100669 command_runner.go:130] > -16
	I0731 11:14:10.236321  100669 ops.go:34] apiserver oom_adj: -16
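
Note on the oom_adj check above: the log runs cat /proc/$(pgrep kube-apiserver)/oom_adj and records -16, i.e. the apiserver is shielded from the OOM killer. The same probe in plain Go, as a sketch rather than minikube's implementation:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // Resolves the kube-apiserver PID with pgrep, then reads its
    // /proc/<pid>/oom_adj, matching the shell pipeline in the log.
    func main() {
    	out, err := exec.Command("pgrep", "kube-apiserver").Output()
    	if err != nil {
    		panic(err)
    	}
    	// pgrep may print several PIDs; the first is enough here.
    	pid := strings.Fields(string(out))[0]
    	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("apiserver oom_adj: %s", adj)
    }
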
	I0731 11:14:10.293090  100669 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0731 11:14:10.296986  100669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:14:10.340972  100669 command_runner.go:130] > node/multinode-249026 labeled
	I0731 11:14:10.388796  100669 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 11:14:10.391554  100669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:14:10.457222  100669 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 11:14:10.957999  100669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:14:11.022119  100669 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 11:14:11.457695  100669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:14:11.522419  100669 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 11:14:11.958222  100669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:14:12.022169  100669 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 11:14:12.457723  100669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:14:12.521509  100669 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 11:14:12.958104  100669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:14:13.021992  100669 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 11:14:13.457584  100669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:14:13.519415  100669 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 11:14:13.957431  100669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:14:14.021549  100669 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 11:14:14.458181  100669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:14:14.520206  100669 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 11:14:14.958130  100669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:14:15.020921  100669 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 11:14:15.458121  100669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:14:15.520301  100669 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 11:14:15.958125  100669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:14:16.023397  100669 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 11:14:16.457977  100669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:14:16.522707  100669 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 11:14:16.958338  100669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:14:17.026408  100669 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 11:14:17.457713  100669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:14:17.518208  100669 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 11:14:17.957403  100669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:14:18.019428  100669 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 11:14:18.458336  100669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:14:18.522259  100669 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 11:14:18.957935  100669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:14:19.024238  100669 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 11:14:19.457795  100669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:14:19.522749  100669 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 11:14:19.958374  100669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:14:20.020889  100669 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 11:14:20.457582  100669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:14:20.522210  100669 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 11:14:20.957731  100669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:14:21.019303  100669 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 11:14:21.458176  100669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:14:21.522546  100669 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0731 11:14:21.958353  100669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 11:14:22.037658  100669 command_runner.go:130] > NAME      SECRETS   AGE
	I0731 11:14:22.037689  100669 command_runner.go:130] > default   0         1s
	I0731 11:14:22.040699  100669 kubeadm.go:1081] duration metric: took 11.811899619s to wait for elevateKubeSystemPrivileges.
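
Note on the retry loop above: the repeated "kubectl get sa default" calls at roughly 500ms intervals are a poll for the default ServiceAccount, which the controller manager creates shortly after the control plane is up; elevateKubeSystemPrivileges waits on it before proceeding. A generic sketch of such a poll, with the kubectl path and timeout as illustrative parameters:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // pollDefaultSA retries roughly every 500ms, like the loop in the
    // log, until the default ServiceAccount exists or the deadline passes.
    func pollDefaultSA(kubectl string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command(kubectl, "get", "sa", "default",
    			"--kubeconfig=/var/lib/minikube/kubeconfig")
    		if err := cmd.Run(); err == nil {
    			return nil // exit status 0: the ServiceAccount exists
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("default ServiceAccount not found within %s", timeout)
    }

    func main() {
    	if err := pollDefaultSA("kubectl", time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
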
	I0731 11:14:22.040729  100669 kubeadm.go:406] StartCluster complete in 22.601088408s
	I0731 11:14:22.040750  100669 settings.go:142] acquiring lock: {Name:mk56cd859b72e4589e0c5d99bc981c97b4dc2ff3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:14:22.040832  100669 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16968-8855/kubeconfig
	I0731 11:14:22.041621  100669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16968-8855/kubeconfig: {Name:mk53977df3b191de084093522567bbafd77b3df1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:14:22.041932  100669 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0731 11:14:22.042021  100669 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0731 11:14:22.042114  100669 addons.go:69] Setting storage-provisioner=true in profile "multinode-249026"
	I0731 11:14:22.042135  100669 addons.go:231] Setting addon storage-provisioner=true in "multinode-249026"
	I0731 11:14:22.042142  100669 config.go:182] Loaded profile config "multinode-249026": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0731 11:14:22.042194  100669 host.go:66] Checking if "multinode-249026" exists ...
	I0731 11:14:22.042185  100669 addons.go:69] Setting default-storageclass=true in profile "multinode-249026"
	I0731 11:14:22.042206  100669 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16968-8855/kubeconfig
	I0731 11:14:22.042219  100669 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-249026"
	I0731 11:14:22.042487  100669 kapi.go:59] client config for multinode-249026: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16968-8855/.minikube/profiles/multinode-249026/client.crt", KeyFile:"/home/jenkins/minikube-integration/16968-8855/.minikube/profiles/multinode-249026/client.key", CAFile:"/home/jenkins/minikube-integration/16968-8855/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2840), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 11:14:22.042599  100669 cli_runner.go:164] Run: docker container inspect multinode-249026 --format={{.State.Status}}
	I0731 11:14:22.042683  100669 cli_runner.go:164] Run: docker container inspect multinode-249026 --format={{.State.Status}}
	I0731 11:14:22.043399  100669 cert_rotation.go:137] Starting client certificate rotation controller
	I0731 11:14:22.043622  100669 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0731 11:14:22.043636  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:22.043648  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:22.043658  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:22.058373  100669 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0731 11:14:22.058397  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:22.058415  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:22.058424  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:22.058433  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:22.058441  100669 round_trippers.go:580]     Content-Length: 291
	I0731 11:14:22.058451  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:22 GMT
	I0731 11:14:22.058467  100669 round_trippers.go:580]     Audit-Id: 6bfd6929-eed4-41cc-82e6-5341e225b425
	I0731 11:14:22.058477  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:22.058513  100669 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"cf2497d0-8d01-4f65-8f7c-13691a19b413","resourceVersion":"353","creationTimestamp":"2023-07-31T11:14:09Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0731 11:14:22.058998  100669 request.go:1188] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"cf2497d0-8d01-4f65-8f7c-13691a19b413","resourceVersion":"353","creationTimestamp":"2023-07-31T11:14:09Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0731 11:14:22.059055  100669 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0731 11:14:22.059067  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:22.059078  100669 round_trippers.go:473]     Content-Type: application/json
	I0731 11:14:22.059087  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:22.059096  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:22.066349  100669 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 11:14:22.068119  100669 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 11:14:22.068142  100669 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 11:14:22.068199  100669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-249026
	I0731 11:14:22.068281  100669 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16968-8855/kubeconfig
	I0731 11:14:22.067870  100669 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0731 11:14:22.068409  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:22.068427  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:22.068441  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:22.068451  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:22.068468  100669 round_trippers.go:580]     Content-Length: 291
	I0731 11:14:22.068481  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:22 GMT
	I0731 11:14:22.068495  100669 round_trippers.go:580]     Audit-Id: 5a96146c-1896-4cd1-8b7d-dba0cd00f9ad
	I0731 11:14:22.068507  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:22.068541  100669 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"cf2497d0-8d01-4f65-8f7c-13691a19b413","resourceVersion":"354","creationTimestamp":"2023-07-31T11:14:09Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0731 11:14:22.068673  100669 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0731 11:14:22.068686  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:22.068696  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:22.068617  100669 kapi.go:59] client config for multinode-249026: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16968-8855/.minikube/profiles/multinode-249026/client.crt", KeyFile:"/home/jenkins/minikube-integration/16968-8855/.minikube/profiles/multinode-249026/client.key", CAFile:"/home/jenkins/minikube-integration/16968-8855/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2840), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 11:14:22.068705  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:22.069057  100669 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0731 11:14:22.069070  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:22.069082  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:22.069092  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:22.071014  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:22.071035  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:22.071045  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:22.071055  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:22.071064  100669 round_trippers.go:580]     Content-Length: 291
	I0731 11:14:22.071074  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:22 GMT
	I0731 11:14:22.071093  100669 round_trippers.go:580]     Audit-Id: 2f92c151-b051-4817-92c8-ce15c7acd919
	I0731 11:14:22.071106  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:22.071115  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:22.071137  100669 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"cf2497d0-8d01-4f65-8f7c-13691a19b413","resourceVersion":"354","creationTimestamp":"2023-07-31T11:14:09Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0731 11:14:22.071226  100669 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-249026" context rescaled to 1 replicas
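
Note on the GET/PUT pair above: minikube rewrites spec.replicas on the coredns Deployment's scale subresource from 2 to 1 (single-node clusters only need one CoreDNS replica at this point). With client-go, the same call pair looks roughly like the sketch below; the kubeconfig handling is illustrative, not minikube's code:

    package main

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // rescaleCoreDNS sets the coredns Deployment to one replica via the
    // scale subresource, matching the GET/PUT pair in the log above.
    func rescaleCoreDNS(kubeconfig string) error {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		return err
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		return err
    	}
    	ctx := context.Background()
    	scale, err := cs.AppsV1().Deployments("kube-system").
    		GetScale(ctx, "coredns", metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	scale.Spec.Replicas = 1
    	_, err = cs.AppsV1().Deployments("kube-system").
    		UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
    	return err
    }

    func main() {
    	if err := rescaleCoreDNS(clientcmd.RecommendedHomeFile); err != nil {
    		panic(err)
    	}
    }
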
	I0731 11:14:22.071259  100669 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 11:14:22.073252  100669 out.go:177] * Verifying Kubernetes components...
	I0731 11:14:22.071511  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:22.075278  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:22.075294  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:22.075304  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:22.075312  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:22.075320  100669 round_trippers.go:580]     Content-Length: 109
	I0731 11:14:22.075322  100669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 11:14:22.075328  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:22 GMT
	I0731 11:14:22.075338  100669 round_trippers.go:580]     Audit-Id: 619b5269-56a6-453f-a4c4-eb2374fecbe3
	I0731 11:14:22.075347  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:22.075370  100669 request.go:1188] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"354"},"items":[]}
	I0731 11:14:22.075665  100669 addons.go:231] Setting addon default-storageclass=true in "multinode-249026"
	I0731 11:14:22.075706  100669 host.go:66] Checking if "multinode-249026" exists ...
	I0731 11:14:22.076202  100669 cli_runner.go:164] Run: docker container inspect multinode-249026 --format={{.State.Status}}
	I0731 11:14:22.093436  100669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/multinode-249026/id_rsa Username:docker}
	I0731 11:14:22.104553  100669 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 11:14:22.104575  100669 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 11:14:22.104618  100669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-249026
	I0731 11:14:22.120070  100669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/multinode-249026/id_rsa Username:docker}
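
Note on the sshutil lines above: both addon manifests are copied over fresh SSH sessions to the forwarded Docker port (127.0.0.1:32847) using the machine's id_rsa key. A minimal golang.org/x/crypto/ssh sketch of that dial, with the host, port, user, and key path taken from the log and the insecure host-key callback as a stated assumption for a throwaway test rig:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // Dials the forwarded Docker port with the machine key, as the
    // sshutil lines above do before scp-ing the addon manifests.
    func main() {
    	keyPath := "/home/jenkins/minikube-integration/16968-8855/.minikube/machines/multinode-249026/id_rsa"
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // assumption: acceptable for a local test VM only
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:32847", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	fmt.Println("connected:", string(client.ServerVersion()))
    }
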
	I0731 11:14:22.163157  100669 command_runner.go:130] > apiVersion: v1
	I0731 11:14:22.163177  100669 command_runner.go:130] > data:
	I0731 11:14:22.163184  100669 command_runner.go:130] >   Corefile: |
	I0731 11:14:22.163190  100669 command_runner.go:130] >     .:53 {
	I0731 11:14:22.163196  100669 command_runner.go:130] >         errors
	I0731 11:14:22.163204  100669 command_runner.go:130] >         health {
	I0731 11:14:22.163218  100669 command_runner.go:130] >            lameduck 5s
	I0731 11:14:22.163224  100669 command_runner.go:130] >         }
	I0731 11:14:22.163230  100669 command_runner.go:130] >         ready
	I0731 11:14:22.163246  100669 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0731 11:14:22.163253  100669 command_runner.go:130] >            pods insecure
	I0731 11:14:22.163262  100669 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0731 11:14:22.163269  100669 command_runner.go:130] >            ttl 30
	I0731 11:14:22.163275  100669 command_runner.go:130] >         }
	I0731 11:14:22.163282  100669 command_runner.go:130] >         prometheus :9153
	I0731 11:14:22.163290  100669 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0731 11:14:22.163297  100669 command_runner.go:130] >            max_concurrent 1000
	I0731 11:14:22.163303  100669 command_runner.go:130] >         }
	I0731 11:14:22.163310  100669 command_runner.go:130] >         cache 30
	I0731 11:14:22.163316  100669 command_runner.go:130] >         loop
	I0731 11:14:22.163322  100669 command_runner.go:130] >         reload
	I0731 11:14:22.163331  100669 command_runner.go:130] >         loadbalance
	I0731 11:14:22.163337  100669 command_runner.go:130] >     }
	I0731 11:14:22.163343  100669 command_runner.go:130] > kind: ConfigMap
	I0731 11:14:22.163349  100669 command_runner.go:130] > metadata:
	I0731 11:14:22.163359  100669 command_runner.go:130] >   creationTimestamp: "2023-07-31T11:14:09Z"
	I0731 11:14:22.163366  100669 command_runner.go:130] >   name: coredns
	I0731 11:14:22.163373  100669 command_runner.go:130] >   namespace: kube-system
	I0731 11:14:22.163381  100669 command_runner.go:130] >   resourceVersion: "230"
	I0731 11:14:22.163388  100669 command_runner.go:130] >   uid: f6599a71-b4c1-4dac-ab37-ee4d93fee3af
	I0731 11:14:22.166300  100669 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0731 11:14:22.166518  100669 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16968-8855/kubeconfig
	I0731 11:14:22.166750  100669 kapi.go:59] client config for multinode-249026: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16968-8855/.minikube/profiles/multinode-249026/client.crt", KeyFile:"/home/jenkins/minikube-integration/16968-8855/.minikube/profiles/multinode-249026/client.key", CAFile:"/home/jenkins/minikube-integration/16968-8855/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2840), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 11:14:22.167029  100669 node_ready.go:35] waiting up to 6m0s for node "multinode-249026" to be "Ready" ...
	I0731 11:14:22.167107  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:22.167118  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:22.167129  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:22.167141  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:22.169445  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:22.169464  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:22.169475  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:22.169484  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:22.169492  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:22.169501  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:22.169515  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:22 GMT
	I0731 11:14:22.169522  100669 round_trippers.go:580]     Audit-Id: 708c327f-726d-4ecf-a3c3-160bf7c40ac4
	I0731 11:14:22.169626  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:22.170173  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:22.170189  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:22.170204  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:22.170213  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:22.236754  100669 round_trippers.go:574] Response Status: 200 OK in 66 milliseconds
	I0731 11:14:22.236781  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:22.236792  100669 round_trippers.go:580]     Audit-Id: 9298afdb-90b1-403f-af45-7d0b0e695984
	I0731 11:14:22.236799  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:22.236807  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:22.236819  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:22.236829  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:22.236844  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:22 GMT
	I0731 11:14:22.237000  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:22.353948  100669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 11:14:22.355040  100669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 11:14:22.738366  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:22.738391  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:22.738402  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:22.738421  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:22.745479  100669 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0731 11:14:22.745506  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:22.745517  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:22.745527  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:22 GMT
	I0731 11:14:22.745537  100669 round_trippers.go:580]     Audit-Id: e8cbc2fa-25a7-4f54-9bc4-12c446c77dca
	I0731 11:14:22.745546  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:22.745556  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:22.745565  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:22.746156  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:22.941774  100669 command_runner.go:130] > configmap/coredns replaced
	I0731 11:14:22.941805  100669 start.go:901] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
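
minikube performed that injection with the kubectl-and-sed pipeline logged at 11:14:22.166300. For comparison, a sketch of the equivalent Corefile edit done directly through client-go (hypothetical kubeconfig path; not the actual start.go code):

package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Insert a hosts{} block ahead of the forward plugin, mirroring the sed edit in the log.
	hosts := "        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }\n"
	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"],
		"        forward . /etc/resolv.conf",
		hosts+"        forward . /etc/resolv.conf", 1)

	if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("configmap/coredns updated")
}

CoreDNS's reload plugin (visible in the Corefile dump above) picks up the changed ConfigMap, so the new host record takes effect without restarting the pods.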
	I0731 11:14:23.181869  100669 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0731 11:14:23.186751  100669 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0731 11:14:23.194471  100669 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0731 11:14:23.200563  100669 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0731 11:14:23.206468  100669 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0731 11:14:23.214815  100669 command_runner.go:130] > pod/storage-provisioner created
	I0731 11:14:23.218923  100669 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0731 11:14:23.222023  100669 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0731 11:14:23.223638  100669 addons.go:502] enable addons completed in 1.181617216s: enabled=[storage-provisioner default-storageclass]
	I0731 11:14:23.238162  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:23.238176  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:23.238186  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:23.238229  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:23.240304  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:23.240324  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:23.240334  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:23.240343  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:23.240351  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:23 GMT
	I0731 11:14:23.240365  100669 round_trippers.go:580]     Audit-Id: 9214db4c-7bb7-45ff-9887-491f0c2ccc29
	I0731 11:14:23.240376  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:23.240385  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:23.240477  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:23.738100  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:23.738129  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:23.738141  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:23.738151  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:23.740404  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:23.740422  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:23.740429  100669 round_trippers.go:580]     Audit-Id: 81278dce-d1aa-4fc3-9f28-d0a076f59be8
	I0731 11:14:23.740434  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:23.740440  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:23.740445  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:23.740451  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:23.740456  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:23 GMT
	I0731 11:14:23.740605  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:24.238325  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:24.238346  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:24.238354  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:24.238360  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:24.240766  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:24.240784  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:24.240790  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:24.240796  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:24.240802  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:24.240807  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:24.240814  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:24 GMT
	I0731 11:14:24.240819  100669 round_trippers.go:580]     Audit-Id: 9f427ab5-7e1d-4f18-860f-fafb88bbc78c
	I0731 11:14:24.240925  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:24.241215  100669 node_ready.go:58] node "multinode-249026" has status "Ready":"False"
	I0731 11:14:24.738579  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:24.738598  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:24.738606  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:24.738612  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:24.740840  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:24.740866  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:24.740874  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:24.740881  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:24 GMT
	I0731 11:14:24.740886  100669 round_trippers.go:580]     Audit-Id: a2d52c35-9d47-4d1b-b31f-cb553f30be1f
	I0731 11:14:24.740892  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:24.740897  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:24.740902  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:24.741084  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:25.237690  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:25.237726  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:25.237734  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:25.237740  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:25.240235  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:25.240250  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:25.240257  100669 round_trippers.go:580]     Audit-Id: d92e3aa1-95da-4fbd-996f-298b8563cd3e
	I0731 11:14:25.240263  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:25.240270  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:25.240278  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:25.240286  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:25.240296  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:25 GMT
	I0731 11:14:25.240514  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:25.738091  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:25.738110  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:25.738118  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:25.738125  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:25.740737  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:25.740760  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:25.740768  100669 round_trippers.go:580]     Audit-Id: d7459fa0-d258-4531-8be2-a037393eb1b5
	I0731 11:14:25.740773  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:25.740779  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:25.740784  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:25.740793  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:25.740799  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:25 GMT
	I0731 11:14:25.740919  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:26.238482  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:26.238503  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:26.238511  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:26.238517  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:26.240780  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:26.240804  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:26.240811  100669 round_trippers.go:580]     Audit-Id: a14d8360-c8f2-40c7-b290-f335af094017
	I0731 11:14:26.240817  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:26.240823  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:26.240828  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:26.240833  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:26.240838  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:26 GMT
	I0731 11:14:26.240969  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:26.241317  100669 node_ready.go:58] node "multinode-249026" has status "Ready":"False"
	I0731 11:14:26.738609  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:26.738629  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:26.738637  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:26.738643  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:26.741228  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:26.741250  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:26.741260  100669 round_trippers.go:580]     Audit-Id: c6e8e8a5-fd82-4b4e-b8c4-5b754fc741ad
	I0731 11:14:26.741267  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:26.741275  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:26.741282  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:26.741291  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:26.741301  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:26 GMT
	I0731 11:14:26.741420  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:27.238439  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:27.238458  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:27.238465  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:27.238471  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:27.240815  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:27.240837  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:27.240845  100669 round_trippers.go:580]     Audit-Id: 10dec016-cb11-4303-9a65-0d0a8671ec5a
	I0731 11:14:27.240851  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:27.240856  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:27.240863  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:27.240872  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:27.240880  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:27 GMT
	I0731 11:14:27.241007  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:27.738638  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:27.738660  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:27.738672  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:27.738687  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:27.741228  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:27.741246  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:27.741253  100669 round_trippers.go:580]     Audit-Id: eb044474-ee8d-48c4-84a0-670a7656e787
	I0731 11:14:27.741258  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:27.741263  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:27.741269  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:27.741274  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:27.741279  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:27 GMT
	I0731 11:14:27.741415  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:28.238000  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:28.238020  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:28.238033  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:28.238040  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:28.240573  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:28.240599  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:28.240609  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:28.240616  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:28 GMT
	I0731 11:14:28.240625  100669 round_trippers.go:580]     Audit-Id: a0864ad1-c724-4e08-9a6c-57c9427c3782
	I0731 11:14:28.240639  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:28.240658  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:28.240667  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:28.240825  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:28.738065  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:28.738098  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:28.738106  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:28.738112  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:28.740372  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:28.740395  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:28.740405  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:28.740414  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:28.740422  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:28.740435  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:28.740447  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:28 GMT
	I0731 11:14:28.740455  100669 round_trippers.go:580]     Audit-Id: 80649bbe-a7b1-46da-b95f-29249b4363db
	I0731 11:14:28.740552  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:28.740871  100669 node_ready.go:58] node "multinode-249026" has status "Ready":"False"
	I0731 11:14:29.238149  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:29.238171  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:29.238181  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:29.238191  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:29.240387  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:29.240408  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:29.240419  100669 round_trippers.go:580]     Audit-Id: 31a1c854-d0c2-4c3b-9a82-9abfba81b120
	I0731 11:14:29.240427  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:29.240434  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:29.240441  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:29.240450  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:29.240462  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:29 GMT
	I0731 11:14:29.240595  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:29.738136  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:29.738162  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:29.738179  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:29.738186  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:29.740542  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:29.740565  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:29.740576  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:29.740586  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:29.740595  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:29.740603  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:29 GMT
	I0731 11:14:29.740612  100669 round_trippers.go:580]     Audit-Id: 876481b0-9023-48f9-a09d-2c0648c42620
	I0731 11:14:29.740620  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:29.740739  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:30.238353  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:30.238373  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:30.238381  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:30.238387  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:30.240728  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:30.240746  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:30.240754  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:30.240760  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:30.240765  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:30.240774  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:30.240781  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:30 GMT
	I0731 11:14:30.240789  100669 round_trippers.go:580]     Audit-Id: d6a8efbe-6413-4b72-bd7e-43f1c96d2d66
	I0731 11:14:30.240906  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:30.738516  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:30.738537  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:30.738546  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:30.738553  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:30.741118  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:30.741136  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:30.741143  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:30.741149  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:30.741154  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:30.741160  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:30 GMT
	I0731 11:14:30.741165  100669 round_trippers.go:580]     Audit-Id: e44a36d1-b7f6-44fe-8593-f6661a8678f8
	I0731 11:14:30.741170  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:30.741429  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:30.741740  100669 node_ready.go:58] node "multinode-249026" has status "Ready":"False"
	I0731 11:14:31.238014  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:31.238036  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:31.238048  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:31.238055  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:31.240433  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:31.240451  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:31.240458  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:31.240464  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:31.240472  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:31.240480  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:31.240492  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:31 GMT
	I0731 11:14:31.240504  100669 round_trippers.go:580]     Audit-Id: 5bd871ca-9da3-4667-b4cb-a47ad251649a
	I0731 11:14:31.240666  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:31.738377  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:31.738406  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:31.738419  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:31.738435  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:31.740758  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:31.740781  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:31.740788  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:31.740797  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:31 GMT
	I0731 11:14:31.740807  100669 round_trippers.go:580]     Audit-Id: 4f4652dd-182a-47df-84b6-2f2949ba8c5b
	I0731 11:14:31.740817  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:31.740826  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:31.740840  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:31.740995  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:32.237779  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:32.237807  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:32.237820  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:32.237830  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:32.240180  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:32.240205  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:32.240215  100669 round_trippers.go:580]     Audit-Id: 21114637-9c7e-48f9-95df-2296d97ddebd
	I0731 11:14:32.240222  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:32.240230  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:32.240239  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:32.240255  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:32.240264  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:32 GMT
	I0731 11:14:32.240468  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:32.737943  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:32.737962  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:32.737970  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:32.737976  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:32.740247  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:32.740264  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:32.740271  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:32.740277  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:32 GMT
	I0731 11:14:32.740283  100669 round_trippers.go:580]     Audit-Id: 8a4369bd-498d-45f8-827e-a9659309191a
	I0731 11:14:32.740291  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:32.740299  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:32.740308  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:32.740431  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:33.238545  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:33.238569  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:33.238577  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:33.238584  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:33.240837  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:33.240856  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:33.240862  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:33.240870  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:33 GMT
	I0731 11:14:33.240875  100669 round_trippers.go:580]     Audit-Id: f27d1132-c53a-4563-a540-1ee80a90bdbf
	I0731 11:14:33.240880  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:33.240886  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:33.240891  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:33.240994  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:33.241290  100669 node_ready.go:58] node "multinode-249026" has status "Ready":"False"
	I0731 11:14:33.738547  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:33.738578  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:33.738586  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:33.738592  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:33.741030  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:33.741054  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:33.741064  100669 round_trippers.go:580]     Audit-Id: 44a63766-2f9e-4334-be86-144e86b2b843
	I0731 11:14:33.741073  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:33.741081  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:33.741089  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:33.741098  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:33.741114  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:33 GMT
	I0731 11:14:33.741249  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:34.237756  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:34.237776  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:34.237783  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:34.237789  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:34.240683  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:34.240706  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:34.240719  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:34.240729  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:34.240737  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:34.240744  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:34.240750  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:34 GMT
	I0731 11:14:34.240756  100669 round_trippers.go:580]     Audit-Id: e66e0d06-341e-452e-adef-536d93d5990d
	I0731 11:14:34.240855  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:34.738481  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:34.738499  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:34.738507  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:34.738513  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:34.740767  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:34.740784  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:34.740791  100669 round_trippers.go:580]     Audit-Id: 52d74112-c051-4e16-9783-65329ee8bb28
	I0731 11:14:34.740796  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:34.740804  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:34.740809  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:34.740816  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:34.740823  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:34 GMT
	I0731 11:14:34.740903  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:35.238564  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:35.238589  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:35.238597  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:35.238603  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:35.240919  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:35.240942  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:35.240954  100669 round_trippers.go:580]     Audit-Id: 6e2f6220-4655-40ea-8333-31fb765dc78f
	I0731 11:14:35.240964  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:35.240974  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:35.240980  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:35.240989  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:35.240995  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:35 GMT
	I0731 11:14:35.241139  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:35.241508  100669 node_ready.go:58] node "multinode-249026" has status "Ready":"False"
	I0731 11:14:35.738641  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:35.738665  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:35.738677  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:35.738688  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:35.741119  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:35.741145  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:35.741155  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:35.741164  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:35.741174  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:35 GMT
	I0731 11:14:35.741185  100669 round_trippers.go:580]     Audit-Id: 6b5774e4-84f8-4962-ba06-dd97e6594715
	I0731 11:14:35.741195  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:35.741203  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:35.741300  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:36.237822  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:36.237842  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:36.237851  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:36.237858  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:36.240260  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:36.240281  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:36.240291  100669 round_trippers.go:580]     Audit-Id: 6acc64da-2945-4f56-a1b6-23cac9b21e32
	I0731 11:14:36.240300  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:36.240309  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:36.240318  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:36.240332  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:36.240341  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:36 GMT
	I0731 11:14:36.240481  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:36.738018  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:36.738041  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:36.738057  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:36.738063  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:36.740443  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:36.740470  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:36.740480  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:36.740486  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:36.740492  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:36 GMT
	I0731 11:14:36.740500  100669 round_trippers.go:580]     Audit-Id: 8bd4b31a-c7c9-4e51-8c31-0eb9962b02a6
	I0731 11:14:36.740509  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:36.740521  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:36.740631  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:37.238644  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:37.238663  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:37.238671  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:37.238677  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:37.240950  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:37.240970  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:37.240977  100669 round_trippers.go:580]     Audit-Id: abe1fb51-c127-4fac-8976-e6552cc7c59c
	I0731 11:14:37.240982  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:37.240988  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:37.240993  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:37.240998  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:37.241007  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:37 GMT
	I0731 11:14:37.241127  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:37.241528  100669 node_ready.go:58] node "multinode-249026" has status "Ready":"False"
	I0731 11:14:37.737704  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:37.737734  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:37.737745  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:37.737758  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:37.740126  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:37.740148  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:37.740161  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:37.740172  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:37.740181  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:37 GMT
	I0731 11:14:37.740195  100669 round_trippers.go:580]     Audit-Id: 558588e2-5427-4bf5-9e70-96d709e21858
	I0731 11:14:37.740201  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:37.740209  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:37.740305  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:38.237805  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:38.237831  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:38.237841  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:38.237851  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:38.240122  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:38.240147  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:38.240159  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:38.240170  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:38 GMT
	I0731 11:14:38.240179  100669 round_trippers.go:580]     Audit-Id: aaa142df-5e5e-464f-84b4-51a639e38e10
	I0731 11:14:38.240188  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:38.240200  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:38.240210  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:38.240622  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:38.737760  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:38.737780  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:38.737790  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:38.737799  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:38.740150  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:38.740171  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:38.740178  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:38.740185  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:38 GMT
	I0731 11:14:38.740191  100669 round_trippers.go:580]     Audit-Id: 3b0421c4-994c-4ea1-974b-f10c62f7bbce
	I0731 11:14:38.740197  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:38.740207  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:38.740223  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:38.740347  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:39.237844  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:39.237874  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:39.237882  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:39.237890  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:39.240268  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:39.240287  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:39.240294  100669 round_trippers.go:580]     Audit-Id: 43a0824d-d216-48c5-80f7-2a4a126eb46d
	I0731 11:14:39.240300  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:39.240306  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:39.240313  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:39.240322  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:39.240335  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:39 GMT
	I0731 11:14:39.240490  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:39.738024  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:39.738045  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:39.738052  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:39.738058  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:39.740340  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:39.740361  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:39.740368  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:39.740374  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:39.740379  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:39 GMT
	I0731 11:14:39.740385  100669 round_trippers.go:580]     Audit-Id: ed96f1cd-dda8-4ecf-9497-df8ce6328d9d
	I0731 11:14:39.740393  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:39.740402  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:39.740522  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:39.740837  100669 node_ready.go:58] node "multinode-249026" has status "Ready":"False"
	I0731 11:14:40.238087  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:40.238107  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:40.238116  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:40.238122  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:40.240325  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:40.240348  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:40.240358  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:40.240367  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:40.240377  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:40 GMT
	I0731 11:14:40.240403  100669 round_trippers.go:580]     Audit-Id: 9da8cb31-53ed-4d35-8b6d-f047357c30ae
	I0731 11:14:40.240414  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:40.240419  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:40.240558  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:40.738058  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:40.738077  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:40.738085  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:40.738091  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:40.740475  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:40.740506  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:40.740518  100669 round_trippers.go:580]     Audit-Id: f77245f3-6a41-4416-8746-314573e46fdc
	I0731 11:14:40.740527  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:40.740536  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:40.740546  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:40.740564  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:40.740573  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:40 GMT
	I0731 11:14:40.740688  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:41.238332  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:41.238359  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:41.238372  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:41.238382  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:41.240970  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:41.240995  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:41.241006  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:41.241016  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:41 GMT
	I0731 11:14:41.241025  100669 round_trippers.go:580]     Audit-Id: 80acb557-c22b-4ff1-b07f-5197c130a05f
	I0731 11:14:41.241037  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:41.241048  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:41.241054  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:41.241187  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:41.737648  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:41.737666  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:41.737674  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:41.737681  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:41.740134  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:41.740155  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:41.740164  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:41.740171  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:41.740178  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:41.740186  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:41 GMT
	I0731 11:14:41.740194  100669 round_trippers.go:580]     Audit-Id: c408e958-a5d2-4ace-9cdf-b601ccfa77e4
	I0731 11:14:41.740201  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:41.740295  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:42.238246  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:42.238265  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:42.238276  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:42.238283  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:42.240763  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:42.240790  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:42.240800  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:42.240810  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:42.240820  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:42.240830  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:42 GMT
	I0731 11:14:42.240840  100669 round_trippers.go:580]     Audit-Id: 4e183c97-da64-485b-a6d2-5537b8ab94d1
	I0731 11:14:42.240849  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:42.240959  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:42.241257  100669 node_ready.go:58] node "multinode-249026" has status "Ready":"False"
	I0731 11:14:42.738608  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:42.738642  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:42.738650  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:42.738657  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:42.740889  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:42.740908  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:42.740915  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:42.740921  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:42.740926  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:42 GMT
	I0731 11:14:42.740931  100669 round_trippers.go:580]     Audit-Id: 33c78e8a-0c8e-450f-a85b-a254b3b4bb13
	I0731 11:14:42.740936  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:42.740941  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:42.741037  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:43.237646  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:43.237666  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:43.237674  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:43.237680  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:43.239822  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:43.239841  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:43.239852  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:43.239860  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:43 GMT
	I0731 11:14:43.239867  100669 round_trippers.go:580]     Audit-Id: 1a8502ca-2ce5-4dda-ab8e-6df62eb4cf25
	I0731 11:14:43.239891  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:43.239901  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:43.239912  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:43.240039  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:43.738663  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:43.738682  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:43.738690  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:43.738696  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:43.741050  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:43.741071  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:43.741083  100669 round_trippers.go:580]     Audit-Id: a9c573ad-f15a-4c74-8830-3af7c136892f
	I0731 11:14:43.741089  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:43.741095  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:43.741101  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:43.741107  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:43.741113  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:43 GMT
	I0731 11:14:43.741216  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:44.237686  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:44.237705  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:44.237713  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:44.237719  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:44.240284  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:44.240313  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:44.240324  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:44.240333  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:44.240343  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:44 GMT
	I0731 11:14:44.240351  100669 round_trippers.go:580]     Audit-Id: dc43e070-d913-4b5e-a68e-9022f6fcf533
	I0731 11:14:44.240363  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:44.240374  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:44.240512  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:44.738068  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:44.738088  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:44.738096  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:44.738102  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:44.740383  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:44.740406  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:44.740415  100669 round_trippers.go:580]     Audit-Id: 1afb84f5-5623-4845-bbba-2ef237ecd7b5
	I0731 11:14:44.740423  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:44.740449  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:44.740462  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:44.740475  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:44.740486  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:44 GMT
	I0731 11:14:44.740582  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:44.740895  100669 node_ready.go:58] node "multinode-249026" has status "Ready":"False"
	I0731 11:14:45.238086  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:45.238105  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:45.238113  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:45.238119  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:45.240533  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:45.240554  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:45.240563  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:45.240571  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:45 GMT
	I0731 11:14:45.240578  100669 round_trippers.go:580]     Audit-Id: 91316978-bdca-43c6-b1e5-aa097f20d17b
	I0731 11:14:45.240585  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:45.240593  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:45.240604  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:45.240741  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:45.738347  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:45.738368  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:45.738376  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:45.738382  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:45.740620  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:45.740649  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:45.740660  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:45.740670  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:45.740679  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:45.740688  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:45.740699  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:45 GMT
	I0731 11:14:45.740705  100669 round_trippers.go:580]     Audit-Id: a6090dcf-5a5c-4077-bf9c-6131811ed61a
	I0731 11:14:45.740815  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:46.238428  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:46.238447  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:46.238455  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:46.238462  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:46.240730  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:46.240749  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:46.240757  100669 round_trippers.go:580]     Audit-Id: 7a91c9fd-4714-458d-a4e5-a14925ce8ade
	I0731 11:14:46.240765  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:46.240774  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:46.240783  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:46.240792  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:46.240801  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:46 GMT
	I0731 11:14:46.240926  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:46.738526  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:46.738545  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:46.738553  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:46.738559  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:46.740811  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:46.740837  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:46.740847  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:46 GMT
	I0731 11:14:46.740855  100669 round_trippers.go:580]     Audit-Id: 7dd3d85d-660d-4e9a-a948-5a6bcd6d4c69
	I0731 11:14:46.740861  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:46.740866  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:46.740874  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:46.740880  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:46.741032  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:46.741360  100669 node_ready.go:58] node "multinode-249026" has status "Ready":"False"
	I0731 11:14:47.237971  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:47.237992  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:47.238000  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:47.238006  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:47.240212  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:47.240238  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:47.240249  100669 round_trippers.go:580]     Audit-Id: 92c0c7cc-7c6d-4708-ba83-879c28723c62
	I0731 11:14:47.240258  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:47.240266  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:47.240277  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:47.240286  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:47.240299  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:47 GMT
	I0731 11:14:47.240422  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:47.737723  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:47.737761  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:47.737772  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:47.737779  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:47.740038  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:47.740058  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:47.740065  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:47.740071  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:47.740077  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:47.740082  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:47 GMT
	I0731 11:14:47.740091  100669 round_trippers.go:580]     Audit-Id: 3583d2e5-f17a-4349-9ac0-05ef6f1ee662
	I0731 11:14:47.740100  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:47.740234  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:48.237745  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:48.237768  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:48.237779  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:48.237787  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:48.240172  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:48.240200  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:48.240211  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:48 GMT
	I0731 11:14:48.240220  100669 round_trippers.go:580]     Audit-Id: 5af34bf4-0209-4158-802d-98790039e609
	I0731 11:14:48.240226  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:48.240235  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:48.240242  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:48.240252  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:48.240403  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:48.737814  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:48.737833  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:48.737841  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:48.737848  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:48.740018  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:48.740039  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:48.740047  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:48.740052  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:48.740059  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:48.740065  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:48 GMT
	I0731 11:14:48.740074  100669 round_trippers.go:580]     Audit-Id: c5be2ef3-7e12-4613-9e31-e2a4a51eab62
	I0731 11:14:48.740083  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:48.740204  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:49.237738  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:49.237757  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:49.237781  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:49.237792  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:49.240423  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:49.240440  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:49.240447  100669 round_trippers.go:580]     Audit-Id: 7a30e6d5-394d-4471-8cbc-9b7c9e9e77ea
	I0731 11:14:49.240453  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:49.240459  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:49.240464  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:49.240473  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:49.240481  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:49 GMT
	I0731 11:14:49.240605  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:49.240970  100669 node_ready.go:58] node "multinode-249026" has status "Ready":"False"
	I0731 11:14:49.738127  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:49.738150  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:49.738159  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:49.738165  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:49.740430  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:49.740449  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:49.740456  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:49.740462  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:49 GMT
	I0731 11:14:49.740470  100669 round_trippers.go:580]     Audit-Id: 894bbdb2-6d11-478e-bb6e-8f86e9a39b94
	I0731 11:14:49.740479  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:49.740487  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:49.740504  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:49.740602  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:50.238264  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:50.238284  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:50.238292  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:50.238298  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:50.240778  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:50.240800  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:50.240812  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:50.240821  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:50.240831  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:50.240840  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:50 GMT
	I0731 11:14:50.240848  100669 round_trippers.go:580]     Audit-Id: 98327bbb-3839-40fc-8103-d3ce2c550698
	I0731 11:14:50.240854  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:50.240999  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:50.738471  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:50.738493  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:50.738502  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:50.738508  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:50.740718  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:50.740743  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:50.740754  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:50.740763  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:50 GMT
	I0731 11:14:50.740769  100669 round_trippers.go:580]     Audit-Id: 1b6dd840-d18e-47ca-b218-b4573ea84557
	I0731 11:14:50.740775  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:50.740780  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:50.740786  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:50.740869  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:51.238548  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:51.238571  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:51.238580  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:51.238586  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:51.241014  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:51.241039  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:51.241050  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:51.241059  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:51.241068  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:51 GMT
	I0731 11:14:51.241075  100669 round_trippers.go:580]     Audit-Id: 35a7cd03-f326-433e-a8d6-477cd490682c
	I0731 11:14:51.241081  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:51.241086  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:51.241187  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:51.241521  100669 node_ready.go:58] node "multinode-249026" has status "Ready":"False"
	I0731 11:14:51.737681  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:51.737718  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:51.737730  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:51.737739  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:51.740164  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:51.740192  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:51.740204  100669 round_trippers.go:580]     Audit-Id: a6b3e704-8ed9-4ed4-a30e-717510f0ebbf
	I0731 11:14:51.740219  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:51.740229  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:51.740247  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:51.740260  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:51.740273  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:51 GMT
	I0731 11:14:51.740396  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:52.238253  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:52.238276  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:52.238288  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:52.238298  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:52.240644  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:52.240672  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:52.240682  100669 round_trippers.go:580]     Audit-Id: 43e0d2d1-0790-4467-9da0-e4adf4e280f4
	I0731 11:14:52.240691  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:52.240698  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:52.240703  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:52.240708  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:52.240714  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:52 GMT
	I0731 11:14:52.240814  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:52.738486  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:52.738510  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:52.738523  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:52.738533  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:52.740859  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:52.740886  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:52.740896  100669 round_trippers.go:580]     Audit-Id: 7d50a8ca-16b2-40da-a389-807f9f5c5f5d
	I0731 11:14:52.740906  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:52.740915  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:52.740926  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:52.740938  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:52.740943  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:52 GMT
	I0731 11:14:52.741041  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:53.238717  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:53.238740  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:53.238752  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:53.238762  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:53.241275  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:53.241296  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:53.241307  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:53.241317  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:53 GMT
	I0731 11:14:53.241323  100669 round_trippers.go:580]     Audit-Id: 668faeac-bb2e-4680-8ab3-1acd1b793c74
	I0731 11:14:53.241329  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:53.241336  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:53.241344  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:53.241500  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"309","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0731 11:14:53.241825  100669 node_ready.go:58] node "multinode-249026" has status "Ready":"False"
	I0731 11:14:53.737969  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:53.737993  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:53.738010  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:53.738020  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:53.740284  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:53.740309  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:53.740319  100669 round_trippers.go:580]     Audit-Id: 2c6483d2-cb3c-4ef1-a1a4-945f53997545
	I0731 11:14:53.740328  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:53.740341  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:53.740350  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:53.740364  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:53.740375  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:53 GMT
	I0731 11:14:53.740468  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"401","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0731 11:14:53.740768  100669 node_ready.go:49] node "multinode-249026" has status "Ready":"True"
	I0731 11:14:53.740783  100669 node_ready.go:38] duration metric: took 31.573737071s waiting for node "multinode-249026" to be "Ready" ...
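The block above is minikube's node_ready wait loop as seen through client-go's verbosity-gated debug logging (round_trippers.go prints the request URL, headers, and timing; request.go prints the truncated response body): the Node object is fetched roughly every 500ms until its Ready condition flips to True, which here took 31.57s. Below is a minimal sketch of that poll pattern, assuming client-go; the helper names, kubeconfig path, poll interval, and timeout are illustrative assumptions, not minikube's actual node_ready.go implementation.

	// Sketch only: mirrors the GET-and-check cadence visible in the log above.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isNodeReady reports whether the Node's Ready condition is True.
	func isNodeReady(node *corev1.Node) bool {
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Illustrative kubeconfig location; minikube reaches the cluster's API
		// server (https://192.168.58.2:8443 in the log) through its own config.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll every 500ms (the cadence in the timestamps above) until the
		// node is Ready or the assumed timeout expires.
		err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
			node, getErr := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-249026", metav1.GetOptions{})
			if getErr != nil {
				return false, nil // treat API errors as transient and keep polling
			}
			return isNodeReady(node), nil
		})
		fmt.Println("node ready wait finished, err =", err)
	}
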
	I0731 11:14:53.740792  100669 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 11:14:53.740839  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0731 11:14:53.740848  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:53.740855  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:53.740861  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:53.743808  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:53.743835  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:53.743847  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:53.743854  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:53.743863  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:53 GMT
	I0731 11:14:53.743871  100669 round_trippers.go:580]     Audit-Id: 5ac94583-313f-4b66-8dda-12af392ae0a9
	I0731 11:14:53.743898  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:53.743913  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:53.744360  100669 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"407"},"items":[{"metadata":{"name":"coredns-5d78c9869d-z57mv","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"10ff228c-5c0d-4012-8b2c-79ff8210e4e1","resourceVersion":"406","creationTimestamp":"2023-07-31T11:14:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a9fa5001-05d6-48d3-ac30-b77811f7aa33","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a9fa5001-05d6-48d3-ac30-b77811f7aa33\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55535 chars]
	I0731 11:14:53.747336  100669 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-z57mv" in "kube-system" namespace to be "Ready" ...
	I0731 11:14:53.747398  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-z57mv
	I0731 11:14:53.747407  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:53.747415  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:53.747422  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:53.749378  100669 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0731 11:14:53.749399  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:53.749409  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:53.749418  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:53.749426  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:53.749434  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:53.749446  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:53 GMT
	I0731 11:14:53.749463  100669 round_trippers.go:580]     Audit-Id: 018c81c4-1723-44d6-8951-fc271a779494
	I0731 11:14:53.749544  100669 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-z57mv","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"10ff228c-5c0d-4012-8b2c-79ff8210e4e1","resourceVersion":"406","creationTimestamp":"2023-07-31T11:14:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a9fa5001-05d6-48d3-ac30-b77811f7aa33","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a9fa5001-05d6-48d3-ac30-b77811f7aa33\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0731 11:14:53.749945  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:53.749957  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:53.749964  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:53.749971  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:53.751664  100669 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0731 11:14:53.751678  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:53.751684  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:53.751690  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:53.751695  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:53.751701  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:53.751708  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:53 GMT
	I0731 11:14:53.751716  100669 round_trippers.go:580]     Audit-Id: b0bbd50d-342e-4af4-bdc6-d7156dcaeb36
	I0731 11:14:53.751937  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"401","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0731 11:14:53.752262  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-z57mv
	I0731 11:14:53.752275  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:53.752285  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:53.752295  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:53.755575  100669 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 11:14:53.755593  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:53.755603  100669 round_trippers.go:580]     Audit-Id: e114d3a6-033a-4055-94b5-34649e8c1b67
	I0731 11:14:53.755613  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:53.755621  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:53.755632  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:53.755641  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:53.755652  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:53 GMT
	I0731 11:14:53.755759  100669 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-z57mv","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"10ff228c-5c0d-4012-8b2c-79ff8210e4e1","resourceVersion":"406","creationTimestamp":"2023-07-31T11:14:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a9fa5001-05d6-48d3-ac30-b77811f7aa33","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a9fa5001-05d6-48d3-ac30-b77811f7aa33\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0731 11:14:53.756262  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:53.756280  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:53.756291  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:53.756298  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:53.758161  100669 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0731 11:14:53.758189  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:53.758199  100669 round_trippers.go:580]     Audit-Id: bda4436e-94b7-43fa-bead-61fd530b05b8
	I0731 11:14:53.758208  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:53.758229  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:53.758242  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:53.758255  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:53.758267  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:53 GMT
	I0731 11:14:53.758375  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"401","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0731 11:14:54.259009  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-z57mv
	I0731 11:14:54.259030  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:54.259042  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:54.259062  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:54.261547  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:54.261575  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:54.261587  100669 round_trippers.go:580]     Audit-Id: bb69860a-9465-49b1-b843-310b3bceefe1
	I0731 11:14:54.261596  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:54.261605  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:54.261614  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:54.261621  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:54.261627  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:54 GMT
	I0731 11:14:54.261796  100669 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-z57mv","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"10ff228c-5c0d-4012-8b2c-79ff8210e4e1","resourceVersion":"406","creationTimestamp":"2023-07-31T11:14:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a9fa5001-05d6-48d3-ac30-b77811f7aa33","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a9fa5001-05d6-48d3-ac30-b77811f7aa33\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0731 11:14:54.262256  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:54.262269  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:54.262276  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:54.262282  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:54.264394  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:54.264417  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:54.264429  100669 round_trippers.go:580]     Audit-Id: 1edb62b6-c0a1-47ee-ac68-8fcf4beebdf9
	I0731 11:14:54.264439  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:54.264448  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:54.264456  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:54.264467  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:54.264473  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:54 GMT
	I0731 11:14:54.264584  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"401","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0731 11:14:54.759809  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-z57mv
	I0731 11:14:54.759831  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:54.759841  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:54.759848  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:54.762498  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:54.762519  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:54.762529  100669 round_trippers.go:580]     Audit-Id: 2616ae8b-0423-47ef-9b56-faf3d97b2a5f
	I0731 11:14:54.762537  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:54.762546  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:54.762554  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:54.762563  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:54.762573  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:54 GMT
	I0731 11:14:54.762741  100669 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-z57mv","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"10ff228c-5c0d-4012-8b2c-79ff8210e4e1","resourceVersion":"417","creationTimestamp":"2023-07-31T11:14:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a9fa5001-05d6-48d3-ac30-b77811f7aa33","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a9fa5001-05d6-48d3-ac30-b77811f7aa33\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0731 11:14:54.763351  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:54.763369  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:54.763381  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:54.763394  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:54.765559  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:54.765593  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:54.765602  100669 round_trippers.go:580]     Audit-Id: 7466e65d-2f52-4667-bbdc-62ff80419a2a
	I0731 11:14:54.765613  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:54.765622  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:54.765633  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:54.765646  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:54.765657  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:54 GMT
	I0731 11:14:54.765799  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"401","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0731 11:14:54.766113  100669 pod_ready.go:92] pod "coredns-5d78c9869d-z57mv" in "kube-system" namespace has status "Ready":"True"
	I0731 11:14:54.766130  100669 pod_ready.go:81] duration metric: took 1.018773946s waiting for pod "coredns-5d78c9869d-z57mv" in "kube-system" namespace to be "Ready" ...
	I0731 11:14:54.766142  100669 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-249026" in "kube-system" namespace to be "Ready" ...
	I0731 11:14:54.766195  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-249026
	I0731 11:14:54.766204  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:54.766215  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:54.766225  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:54.768455  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:54.768470  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:54.768477  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:54.768482  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:54.768488  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:54.768493  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:54 GMT
	I0731 11:14:54.768502  100669 round_trippers.go:580]     Audit-Id: 32ef198e-a69c-497d-9209-66ccf22c3daf
	I0731 11:14:54.768511  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:54.768614  100669 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-249026","namespace":"kube-system","uid":"2fd5af09-4d3d-44e9-a37e-9cfd2a7def67","resourceVersion":"388","creationTimestamp":"2023-07-31T11:14:08Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"5e9e96d0b4488e99389e41cddc8a43f6","kubernetes.io/config.mirror":"5e9e96d0b4488e99389e41cddc8a43f6","kubernetes.io/config.seen":"2023-07-31T11:14:02.981941512Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0731 11:14:54.768940  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:54.768952  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:54.768959  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:54.768965  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:54.770725  100669 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0731 11:14:54.770740  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:54.770747  100669 round_trippers.go:580]     Audit-Id: b7c3654e-c66e-429c-adbc-640addb87bb1
	I0731 11:14:54.770753  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:54.770759  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:54.770764  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:54.770770  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:54.770775  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:54 GMT
	I0731 11:14:54.770868  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"401","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0731 11:14:54.771145  100669 pod_ready.go:92] pod "etcd-multinode-249026" in "kube-system" namespace has status "Ready":"True"
	I0731 11:14:54.771156  100669 pod_ready.go:81] duration metric: took 5.007453ms waiting for pod "etcd-multinode-249026" in "kube-system" namespace to be "Ready" ...
	I0731 11:14:54.771166  100669 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-249026" in "kube-system" namespace to be "Ready" ...
	I0731 11:14:54.771206  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-249026
	I0731 11:14:54.771213  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:54.771220  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:54.771225  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:54.772915  100669 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0731 11:14:54.772938  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:54.772949  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:54.772957  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:54.772963  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:54.772969  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:54.772977  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:54 GMT
	I0731 11:14:54.772983  100669 round_trippers.go:580]     Audit-Id: 85859e07-1b98-4e07-a572-306475ee052d
	I0731 11:14:54.773088  100669 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-249026","namespace":"kube-system","uid":"0978e9cd-dfdc-4299-b370-eecc072de5cd","resourceVersion":"390","creationTimestamp":"2023-07-31T11:14:09Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"fbdd26f0ce94907fb765628c686054b9","kubernetes.io/config.mirror":"fbdd26f0ce94907fb765628c686054b9","kubernetes.io/config.seen":"2023-07-31T11:14:09.344025425Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0731 11:14:54.773436  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:54.773447  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:54.773454  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:54.773461  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:54.775057  100669 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0731 11:14:54.775070  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:54.775077  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:54.775084  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:54.775093  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:54 GMT
	I0731 11:14:54.775101  100669 round_trippers.go:580]     Audit-Id: d80ac75b-3a26-4235-a23d-4e2218cf253b
	I0731 11:14:54.775113  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:54.775125  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:54.775256  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"401","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0731 11:14:54.775516  100669 pod_ready.go:92] pod "kube-apiserver-multinode-249026" in "kube-system" namespace has status "Ready":"True"
	I0731 11:14:54.775528  100669 pod_ready.go:81] duration metric: took 4.356317ms waiting for pod "kube-apiserver-multinode-249026" in "kube-system" namespace to be "Ready" ...
	I0731 11:14:54.775536  100669 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-249026" in "kube-system" namespace to be "Ready" ...
	I0731 11:14:54.775574  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-249026
	I0731 11:14:54.775582  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:54.775589  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:54.775595  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:54.777203  100669 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0731 11:14:54.777217  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:54.777224  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:54.777230  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:54 GMT
	I0731 11:14:54.777235  100669 round_trippers.go:580]     Audit-Id: b9b0e802-f6e8-45a2-85a9-efa1cca1e329
	I0731 11:14:54.777241  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:54.777246  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:54.777252  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:54.777395  100669 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-249026","namespace":"kube-system","uid":"0b6e8e07-eb1c-4e59-b299-3983e587571c","resourceVersion":"389","creationTimestamp":"2023-07-31T11:14:09Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"26066d183dc51193d263b4e20f2cec66","kubernetes.io/config.mirror":"26066d183dc51193d263b4e20f2cec66","kubernetes.io/config.seen":"2023-07-31T11:14:09.344026956Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0731 11:14:54.937990  100669 request.go:628] Waited for 160.239281ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:54.938053  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:54.938058  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:54.938065  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:54.938071  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:54.940410  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:54.940433  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:54.940446  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:54 GMT
	I0731 11:14:54.940456  100669 round_trippers.go:580]     Audit-Id: 14aa3688-7088-4289-8038-75a882af69b7
	I0731 11:14:54.940467  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:54.940473  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:54.940479  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:54.940486  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:54.940603  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"401","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0731 11:14:54.940993  100669 pod_ready.go:92] pod "kube-controller-manager-multinode-249026" in "kube-system" namespace has status "Ready":"True"
	I0731 11:14:54.941015  100669 pod_ready.go:81] duration metric: took 165.473595ms waiting for pod "kube-controller-manager-multinode-249026" in "kube-system" namespace to be "Ready" ...
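The "Waited for ... due to client-side throttling, not priority and fairness" lines above are emitted by client-go's request.go: with the default client-side rate limiter (QPS 5, burst 10), a burst of GETs like this readiness loop gets delayed locally, independent of the server-side Priority and Fairness headers visible in the responses. A minimal sketch of raising those limits on a rest.Config; the QPS/Burst values here are arbitrary examples, not what minikube uses:

    // newFastClient builds a clientset with a larger client-side rate limit,
    // which would avoid the "Waited for ..." delays seen in this log.
    package sketch

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50    // client-go default is 5
        cfg.Burst = 100 // client-go default is 10
        return kubernetes.NewForConfig(cfg)
    }
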
	I0731 11:14:54.941026  100669 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f64nn" in "kube-system" namespace to be "Ready" ...
	I0731 11:14:55.138521  100669 request.go:628] Waited for 197.407396ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f64nn
	I0731 11:14:55.138569  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f64nn
	I0731 11:14:55.138576  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:55.138584  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:55.138590  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:55.140985  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:55.141009  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:55.141020  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:55 GMT
	I0731 11:14:55.141027  100669 round_trippers.go:580]     Audit-Id: 9eada8b9-d7fa-4593-ae5a-122ec6ebe5a2
	I0731 11:14:55.141033  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:55.141038  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:55.141044  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:55.141052  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:55.141197  100669 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-f64nn","generateName":"kube-proxy-","namespace":"kube-system","uid":"c18aed2c-dab2-4dff-b288-2e39582688bb","resourceVersion":"384","creationTimestamp":"2023-07-31T11:14:21Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bc6c1bb2-2508-44c2-864c-44710ecfc28b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc6c1bb2-2508-44c2-864c-44710ecfc28b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0731 11:14:55.338590  100669 request.go:628] Waited for 196.973231ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:55.338639  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:55.338644  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:55.338655  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:55.338662  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:55.341225  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:55.341253  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:55.341264  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:55.341273  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:55.341279  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:55 GMT
	I0731 11:14:55.341285  100669 round_trippers.go:580]     Audit-Id: e88bb5f2-a22a-46f1-9983-344c2d517065
	I0731 11:14:55.341291  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:55.341300  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:55.341415  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"401","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0731 11:14:55.341832  100669 pod_ready.go:92] pod "kube-proxy-f64nn" in "kube-system" namespace has status "Ready":"True"
	I0731 11:14:55.341849  100669 pod_ready.go:81] duration metric: took 400.811691ms waiting for pod "kube-proxy-f64nn" in "kube-system" namespace to be "Ready" ...
	I0731 11:14:55.341861  100669 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-249026" in "kube-system" namespace to be "Ready" ...
	I0731 11:14:55.538294  100669 request.go:628] Waited for 196.362221ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-249026
	I0731 11:14:55.538343  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-249026
	I0731 11:14:55.538357  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:55.538368  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:55.538380  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:55.541027  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:55.541047  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:55.541054  100669 round_trippers.go:580]     Audit-Id: 4d7e5aea-d662-4ca3-b873-58dd9b7feeb2
	I0731 11:14:55.541060  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:55.541065  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:55.541070  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:55.541076  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:55.541082  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:55 GMT
	I0731 11:14:55.541241  100669 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-249026","namespace":"kube-system","uid":"5e54a5cf-023c-4ed1-a1df-e67e9b4ca1f1","resourceVersion":"391","creationTimestamp":"2023-07-31T11:14:09Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"32a0babedab845278fd2b3f9ddf28116","kubernetes.io/config.mirror":"32a0babedab845278fd2b3f9ddf28116","kubernetes.io/config.seen":"2023-07-31T11:14:09.344028171Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0731 11:14:55.739076  100669 request.go:628] Waited for 197.368278ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:55.739127  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:14:55.739131  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:55.739138  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:55.739145  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:55.741548  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:55.741568  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:55.741583  100669 round_trippers.go:580]     Audit-Id: c5d849b1-a082-4833-a750-0ecaeb544ee4
	I0731 11:14:55.741602  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:55.741611  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:55.741617  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:55.741622  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:55.741631  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:55 GMT
	I0731 11:14:55.741733  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"401","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0731 11:14:55.742024  100669 pod_ready.go:92] pod "kube-scheduler-multinode-249026" in "kube-system" namespace has status "Ready":"True"
	I0731 11:14:55.742036  100669 pod_ready.go:81] duration metric: took 400.162224ms waiting for pod "kube-scheduler-multinode-249026" in "kube-system" namespace to be "Ready" ...
	I0731 11:14:55.742047  100669 pod_ready.go:38] duration metric: took 2.001245858s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
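The half-second-spaced GET pairs above (pod, then node) are the readiness wait that pod_ready.go performs for each system-critical pod. A minimal client-go sketch of the same loop; the function name, interval, and error handling are illustrative, not minikube's actual code:

    // waitPodReady polls the pod until its Ready condition is True, roughly
    // matching the ~500ms cadence of the requests in this log.
    package sketch

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat errors as "not ready yet" and keep polling
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }
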
	I0731 11:14:55.742061  100669 api_server.go:52] waiting for apiserver process to appear ...
	I0731 11:14:55.742145  100669 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 11:14:55.751784  100669 command_runner.go:130] > 1446
	I0731 11:14:55.752574  100669 api_server.go:72] duration metric: took 33.681281958s to wait for apiserver process to appear ...
	I0731 11:14:55.752595  100669 api_server.go:88] waiting for apiserver healthz status ...
	I0731 11:14:55.752614  100669 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0731 11:14:55.757553  100669 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0731 11:14:55.757606  100669 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0731 11:14:55.757611  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:55.757619  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:55.757628  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:55.758602  100669 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0731 11:14:55.758618  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:55.758624  100669 round_trippers.go:580]     Content-Length: 263
	I0731 11:14:55.758633  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:55 GMT
	I0731 11:14:55.758639  100669 round_trippers.go:580]     Audit-Id: 0b3ce03b-edfb-498d-bd7a-0f352d905da3
	I0731 11:14:55.758644  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:55.758649  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:55.758655  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:55.758663  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:55.758678  100669 request.go:1188] Response Body: {
	  "major": "1",
	  "minor": "27",
	  "gitVersion": "v1.27.3",
	  "gitCommit": "25b4e43193bcda6c7328a6d147b1fb73a33f1598",
	  "gitTreeState": "clean",
	  "buildDate": "2023-06-14T09:47:40Z",
	  "goVersion": "go1.20.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0731 11:14:55.758748  100669 api_server.go:141] control plane version: v1.27.3
	I0731 11:14:55.758762  100669 api_server.go:131] duration metric: took 6.155632ms to wait for apiserver health ...
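The health check above reduces to two HTTPS GETs: /healthz must return the literal body "ok", and /version yields the control-plane version. A bare net/http sketch of an equivalent probe; a real client (minikube included) authenticates with the cluster CA and client certificates rather than skipping TLS verification as this sketch does:

    // checkAPIServer probes /healthz for "ok", then decodes /version.
    // InsecureSkipVerify is a stand-in for proper cert configuration.
    package sketch

    import (
        "crypto/tls"
        "encoding/json"
        "fmt"
        "io"
        "net/http"
    )

    func checkAPIServer(base string) (string, error) { // base e.g. "https://192.168.58.2:8443"
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
        }}

        resp, err := client.Get(base + "/healthz")
        if err != nil {
            return "", err
        }
        body, _ := io.ReadAll(resp.Body)
        resp.Body.Close()
        if resp.StatusCode != http.StatusOK || string(body) != "ok" {
            return "", fmt.Errorf("healthz: %d %q", resp.StatusCode, body)
        }

        resp, err = client.Get(base + "/version")
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()
        var v struct {
            GitVersion string `json:"gitVersion"`
        }
        if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
            return "", err
        }
        return v.GitVersion, nil // "v1.27.3" in the run above
    }
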
	I0731 11:14:55.758768  100669 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 11:14:55.938133  100669 request.go:628] Waited for 179.291836ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0731 11:14:55.938180  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0731 11:14:55.938185  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:55.938192  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:55.938198  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:55.950446  100669 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0731 11:14:55.950480  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:55.950493  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:55.950503  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:55.950513  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:55.950523  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:55 GMT
	I0731 11:14:55.950533  100669 round_trippers.go:580]     Audit-Id: 4ca7e12a-e36b-4990-ac2b-4a8102028ca9
	I0731 11:14:55.950561  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:55.951146  100669 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"421"},"items":[{"metadata":{"name":"coredns-5d78c9869d-z57mv","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"10ff228c-5c0d-4012-8b2c-79ff8210e4e1","resourceVersion":"417","creationTimestamp":"2023-07-31T11:14:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a9fa5001-05d6-48d3-ac30-b77811f7aa33","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a9fa5001-05d6-48d3-ac30-b77811f7aa33\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55613 chars]
	I0731 11:14:55.954022  100669 system_pods.go:59] 8 kube-system pods found
	I0731 11:14:55.954056  100669 system_pods.go:61] "coredns-5d78c9869d-z57mv" [10ff228c-5c0d-4012-8b2c-79ff8210e4e1] Running
	I0731 11:14:55.954064  100669 system_pods.go:61] "etcd-multinode-249026" [2fd5af09-4d3d-44e9-a37e-9cfd2a7def67] Running
	I0731 11:14:55.954071  100669 system_pods.go:61] "kindnet-pgkb6" [ad194032-2c3b-484e-9c89-9b7bc72632b6] Running
	I0731 11:14:55.954090  100669 system_pods.go:61] "kube-apiserver-multinode-249026" [0978e9cd-dfdc-4299-b370-eecc072de5cd] Running
	I0731 11:14:55.954101  100669 system_pods.go:61] "kube-controller-manager-multinode-249026" [0b6e8e07-eb1c-4e59-b299-3983e587571c] Running
	I0731 11:14:55.954108  100669 system_pods.go:61] "kube-proxy-f64nn" [c18aed2c-dab2-4dff-b288-2e39582688bb] Running
	I0731 11:14:55.954122  100669 system_pods.go:61] "kube-scheduler-multinode-249026" [5e54a5cf-023c-4ed1-a1df-e67e9b4ca1f1] Running
	I0731 11:14:55.954136  100669 system_pods.go:61] "storage-provisioner" [f702705a-8dec-4ac9-98fd-283e1f55614b] Running
	I0731 11:14:55.954144  100669 system_pods.go:74] duration metric: took 195.370167ms to wait for pod list to return data ...
	I0731 11:14:55.954169  100669 default_sa.go:34] waiting for default service account to be created ...
	I0731 11:14:56.138585  100669 request.go:628] Waited for 184.343772ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0731 11:14:56.138639  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0731 11:14:56.138645  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:56.138658  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:56.138670  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:56.140959  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:56.140981  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:56.140991  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:56.140999  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:56.141008  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:56.141018  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:56.141033  100669 round_trippers.go:580]     Content-Length: 261
	I0731 11:14:56.141043  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:56 GMT
	I0731 11:14:56.141055  100669 round_trippers.go:580]     Audit-Id: 2b0770c3-4a21-4323-ad72-a9fecad04081
	I0731 11:14:56.141095  100669 request.go:1188] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"422"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"8139163c-c900-4091-bb16-e12d04bf07b5","resourceVersion":"324","creationTimestamp":"2023-07-31T11:14:21Z"}}]}
	I0731 11:14:56.141298  100669 default_sa.go:45] found service account: "default"
	I0731 11:14:56.141316  100669 default_sa.go:55] duration metric: took 187.137322ms for default service account to be created ...
	I0731 11:14:56.141327  100669 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 11:14:56.338709  100669 request.go:628] Waited for 197.320024ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0731 11:14:56.338765  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0731 11:14:56.338772  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:56.338784  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:56.338798  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:56.341990  100669 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 11:14:56.342010  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:56.342019  100669 round_trippers.go:580]     Audit-Id: 88f6b647-19d3-4909-8fbb-90cbcbe2596a
	I0731 11:14:56.342025  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:56.342030  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:56.342036  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:56.342048  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:56.342056  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:56 GMT
	I0731 11:14:56.342549  100669 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"422"},"items":[{"metadata":{"name":"coredns-5d78c9869d-z57mv","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"10ff228c-5c0d-4012-8b2c-79ff8210e4e1","resourceVersion":"417","creationTimestamp":"2023-07-31T11:14:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a9fa5001-05d6-48d3-ac30-b77811f7aa33","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a9fa5001-05d6-48d3-ac30-b77811f7aa33\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55613 chars]
	I0731 11:14:56.344255  100669 system_pods.go:86] 8 kube-system pods found
	I0731 11:14:56.344274  100669 system_pods.go:89] "coredns-5d78c9869d-z57mv" [10ff228c-5c0d-4012-8b2c-79ff8210e4e1] Running
	I0731 11:14:56.344279  100669 system_pods.go:89] "etcd-multinode-249026" [2fd5af09-4d3d-44e9-a37e-9cfd2a7def67] Running
	I0731 11:14:56.344283  100669 system_pods.go:89] "kindnet-pgkb6" [ad194032-2c3b-484e-9c89-9b7bc72632b6] Running
	I0731 11:14:56.344287  100669 system_pods.go:89] "kube-apiserver-multinode-249026" [0978e9cd-dfdc-4299-b370-eecc072de5cd] Running
	I0731 11:14:56.344292  100669 system_pods.go:89] "kube-controller-manager-multinode-249026" [0b6e8e07-eb1c-4e59-b299-3983e587571c] Running
	I0731 11:14:56.344299  100669 system_pods.go:89] "kube-proxy-f64nn" [c18aed2c-dab2-4dff-b288-2e39582688bb] Running
	I0731 11:14:56.344305  100669 system_pods.go:89] "kube-scheduler-multinode-249026" [5e54a5cf-023c-4ed1-a1df-e67e9b4ca1f1] Running
	I0731 11:14:56.344313  100669 system_pods.go:89] "storage-provisioner" [f702705a-8dec-4ac9-98fd-283e1f55614b] Running
	I0731 11:14:56.344319  100669 system_pods.go:126] duration metric: took 202.987424ms to wait for k8s-apps to be running ...
	I0731 11:14:56.344328  100669 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 11:14:56.344369  100669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 11:14:56.354755  100669 system_svc.go:56] duration metric: took 10.418235ms WaitForService to wait for kubelet.
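The kubelet check above is just an exit-code test of systemctl, run through minikube's ssh_runner inside the node container. A local os/exec sketch of the same test, using standard systemctl syntax:

    // kubeletActive reports whether the kubelet unit is running; with
    // --quiet, systemctl prints nothing and the exit code alone carries
    // the answer (Run returns nil only on exit status 0).
    package main

    import (
        "fmt"
        "os/exec"
    )

    func kubeletActive() bool {
        return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
    }

    func main() {
        fmt.Println("kubelet active:", kubeletActive())
    }
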
	I0731 11:14:56.354783  100669 kubeadm.go:581] duration metric: took 34.283487359s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0731 11:14:56.354811  100669 node_conditions.go:102] verifying NodePressure condition ...
	I0731 11:14:56.538257  100669 request.go:628] Waited for 183.370497ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0731 11:14:56.538310  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0731 11:14:56.538315  100669 round_trippers.go:469] Request Headers:
	I0731 11:14:56.538323  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:14:56.538329  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:14:56.540655  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:14:56.540680  100669 round_trippers.go:577] Response Headers:
	I0731 11:14:56.540690  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:14:56 GMT
	I0731 11:14:56.540699  100669 round_trippers.go:580]     Audit-Id: 6d932e48-03af-4fca-8a7e-ee90b98dd6d6
	I0731 11:14:56.540708  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:14:56.540751  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:14:56.540762  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:14:56.540771  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:14:56.540880  100669 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"422"},"items":[{"metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"401","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6000 chars]
	I0731 11:14:56.541338  100669 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0731 11:14:56.541359  100669 node_conditions.go:123] node cpu capacity is 8
	I0731 11:14:56.541373  100669 node_conditions.go:105] duration metric: took 186.553721ms to run NodePressure ...
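The NodePressure verification above lists the nodes, reads the capacity figures just logged, and confirms no pressure condition is set. A client-go sketch of an equivalent check; the function name is illustrative, not node_conditions.go itself:

    // verifyNodePressure prints each node's cpu and ephemeral-storage
    // capacity and fails if any memory/disk/PID pressure condition is True.
    package sketch

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func verifyNodePressure(ctx context.Context, cs *kubernetes.Clientset) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
            for _, c := range n.Status.Conditions {
                switch c.Type {
                case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
                    if c.Status == corev1.ConditionTrue {
                        return fmt.Errorf("node %s reports %s", n.Name, c.Type)
                    }
                }
            }
        }
        return nil
    }
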
	I0731 11:14:56.541388  100669 start.go:228] waiting for startup goroutines ...
	I0731 11:14:56.541397  100669 start.go:233] waiting for cluster config update ...
	I0731 11:14:56.541414  100669 start.go:242] writing updated cluster config ...
	I0731 11:14:56.543933  100669 out.go:177] 
	I0731 11:14:56.545759  100669 config.go:182] Loaded profile config "multinode-249026": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0731 11:14:56.545866  100669 profile.go:148] Saving config to /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/multinode-249026/config.json ...
	I0731 11:14:56.547710  100669 out.go:177] * Starting worker node multinode-249026-m02 in cluster multinode-249026
	I0731 11:14:56.549077  100669 cache.go:122] Beginning downloading kic base image for docker with crio
	I0731 11:14:56.550535  100669 out.go:177] * Pulling base image ...
	I0731 11:14:56.552374  100669 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0731 11:14:56.552404  100669 cache.go:57] Caching tarball of preloaded images
	I0731 11:14:56.552403  100669 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0731 11:14:56.552531  100669 preload.go:174] Found /home/jenkins/minikube-integration/16968-8855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 11:14:56.552550  100669 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0731 11:14:56.552666  100669 profile.go:148] Saving config to /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/multinode-249026/config.json ...
	I0731 11:14:56.568519  100669 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0731 11:14:56.568543  100669 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0731 11:14:56.568566  100669 cache.go:195] Successfully downloaded all kic artifacts
	I0731 11:14:56.568599  100669 start.go:365] acquiring machines lock for multinode-249026-m02: {Name:mkfe77b6132f6603f605b39d25f2ee92c8eb0c4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 11:14:56.568711  100669 start.go:369] acquired machines lock for "multinode-249026-m02" in 90.407µs
	I0731 11:14:56.568737  100669 start.go:93] Provisioning new machine with config: &{Name:multinode-249026 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-249026 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name:m02 IP: Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0731 11:14:56.568838  100669 start.go:125] createHost starting for "m02" (driver="docker")
	I0731 11:14:56.571924  100669 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0731 11:14:56.572048  100669 start.go:159] libmachine.API.Create for "multinode-249026" (driver="docker")
	I0731 11:14:56.572073  100669 client.go:168] LocalClient.Create starting
	I0731 11:14:56.572154  100669 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca.pem
	I0731 11:14:56.572189  100669 main.go:141] libmachine: Decoding PEM data...
	I0731 11:14:56.572209  100669 main.go:141] libmachine: Parsing certificate...
	I0731 11:14:56.572275  100669 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16968-8855/.minikube/certs/cert.pem
	I0731 11:14:56.572300  100669 main.go:141] libmachine: Decoding PEM data...
	I0731 11:14:56.572319  100669 main.go:141] libmachine: Parsing certificate...
	I0731 11:14:56.572588  100669 cli_runner.go:164] Run: docker network inspect multinode-249026 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 11:14:56.588351  100669 network_create.go:76] Found existing network {name:multinode-249026 subnet:0xc000d9e3c0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0731 11:14:56.588395  100669 kic.go:117] calculated static IP "192.168.58.3" for the "multinode-249026-m02" container
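The static IP above is derived from the existing cluster network: gateway 192.168.58.1 plus the node's ordinal gives 192.168.58.3 for the second node. A toy sketch of that arithmetic only; the real kic.go logic also has to avoid addresses already taken by containers on the network:

    // nodeIP derives a per-node address from the network gateway, e.g.
    // gateway 192.168.58.1 + index 2 -> 192.168.58.3. No collision or
    // overflow handling; this shows the arithmetic alone.
    package main

    import (
        "fmt"
        "net"
    )

    func nodeIP(gateway string, index int) (string, error) {
        ip := net.ParseIP(gateway).To4()
        if ip == nil {
            return "", fmt.Errorf("not an IPv4 gateway: %s", gateway)
        }
        ip[3] += byte(index)
        return ip.String(), nil
    }

    func main() {
        ip, _ := nodeIP("192.168.58.1", 2) // second node in the cluster
        fmt.Println(ip)                    // 192.168.58.3
    }
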
	I0731 11:14:56.588455  100669 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0731 11:14:56.604011  100669 cli_runner.go:164] Run: docker volume create multinode-249026-m02 --label name.minikube.sigs.k8s.io=multinode-249026-m02 --label created_by.minikube.sigs.k8s.io=true
	I0731 11:14:56.619668  100669 oci.go:103] Successfully created a docker volume multinode-249026-m02
	I0731 11:14:56.619727  100669 cli_runner.go:164] Run: docker run --rm --name multinode-249026-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-249026-m02 --entrypoint /usr/bin/test -v multinode-249026-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0731 11:14:57.140238  100669 oci.go:107] Successfully prepared a docker volume multinode-249026-m02
	I0731 11:14:57.140281  100669 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0731 11:14:57.140301  100669 kic.go:190] Starting extracting preloaded images to volume ...
	I0731 11:14:57.140368  100669 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16968-8855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-249026-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0731 11:15:02.089284  100669 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16968-8855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-249026-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (4.948872283s)
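
The preload step above amounts to a single docker invocation: mount the lz4 image tarball read-only next to the node's volume and untar it inside a disposable kicbase container, so the images are in place before the node starts. A minimal Go sketch of that call, assuming only the docker CLI on PATH; extractPreload is a hypothetical helper, but the flags are exactly the ones logged:

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload replays the step logged above: mount the lz4 preload
// tarball read-only plus the node's volume, and untar inside a throwaway
// kicbase container so the images land in the volume.
func extractPreload(tarball, volume, baseImage string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		baseImage,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("extract preload: %v: %s", err, out)
	}
	return nil
}
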
	I0731 11:15:02.089315  100669 kic.go:199] duration metric: took 4.949011 seconds to extract preloaded images to volume
	W0731 11:15:02.089426  100669 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0731 11:15:02.089505  100669 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0731 11:15:02.139403  100669 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-249026-m02 --name multinode-249026-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-249026-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-249026-m02 --network multinode-249026 --ip 192.168.58.3 --volume multinode-249026-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0731 11:15:02.414325  100669 cli_runner.go:164] Run: docker container inspect multinode-249026-m02 --format={{.State.Running}}
	I0731 11:15:02.430598  100669 cli_runner.go:164] Run: docker container inspect multinode-249026-m02 --format={{.State.Status}}
	I0731 11:15:02.448310  100669 cli_runner.go:164] Run: docker exec multinode-249026-m02 stat /var/lib/dpkg/alternatives/iptables
	I0731 11:15:02.515638  100669 oci.go:144] the created container "multinode-249026-m02" has a running status.
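
The two inspect calls above are how the driver confirms the container actually came up. A small sketch of the same check written as a polling loop, assuming the docker CLI; pollRunning and the 500ms interval are illustrative, not minikube's actual code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// pollRunning mirrors the inspect calls above: query
// `docker container inspect --format={{.State.Running}}` until it
// reports true or the deadline passes.
func pollRunning(name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("docker", "container", "inspect",
			name, "--format", "{{.State.Running}}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "true" {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("container %q still not running after %s", name, timeout)
}
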
	I0731 11:15:02.515670  100669 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16968-8855/.minikube/machines/multinode-249026-m02/id_rsa...
	I0731 11:15:02.714885  100669 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/machines/multinode-249026-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0731 11:15:02.714928  100669 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16968-8855/.minikube/machines/multinode-249026-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0731 11:15:02.734116  100669 cli_runner.go:164] Run: docker container inspect multinode-249026-m02 --format={{.State.Status}}
	I0731 11:15:02.752954  100669 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0731 11:15:02.752984  100669 kic_runner.go:114] Args: [docker exec --privileged multinode-249026-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0731 11:15:02.845293  100669 cli_runner.go:164] Run: docker container inspect multinode-249026-m02 --format={{.State.Status}}
	I0731 11:15:02.864166  100669 machine.go:88] provisioning docker machine ...
	I0731 11:15:02.864202  100669 ubuntu.go:169] provisioning hostname "multinode-249026-m02"
	I0731 11:15:02.864255  100669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-249026-m02
	I0731 11:15:02.889259  100669 main.go:141] libmachine: Using SSH client type: native
	I0731 11:15:02.889670  100669 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eb00] 0x811ba0 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0731 11:15:02.889688  100669 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-249026-m02 && echo "multinode-249026-m02" | sudo tee /etc/hostname
	I0731 11:15:03.118653  100669 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-249026-m02
	
	I0731 11:15:03.118731  100669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-249026-m02
	I0731 11:15:03.137834  100669 main.go:141] libmachine: Using SSH client type: native
	I0731 11:15:03.138339  100669 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eb00] 0x811ba0 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0731 11:15:03.138361  100669 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-249026-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-249026-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-249026-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 11:15:03.267712  100669 main.go:141] libmachine: SSH cmd err, output: <nil>: 
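
Both remote commands above (setting the hostname, patching /etc/hosts) go over libmachine's native SSH client to the published port 32852. A sketch of an equivalent one-shot command runner using golang.org/x/crypto/ssh, assuming key auth as the docker user; runSSH is a hypothetical helper, and the InsecureIgnoreHostKey callback is a test-environment shortcut:

package main

import (
	"os"

	"golang.org/x/crypto/ssh"
)

// runSSH runs one command on the node the way libmachine does for the
// "sudo hostname ..." step above, using the machine's private key
// (e.g. .minikube/machines/multinode-249026-m02/id_rsa) against 127.0.0.1:32852.
func runSSH(addr, keyPath, command string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only shortcut
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(command)
	return string(out), err
}
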
	I0731 11:15:03.267738  100669 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16968-8855/.minikube CaCertPath:/home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16968-8855/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16968-8855/.minikube}
	I0731 11:15:03.267755  100669 ubuntu.go:177] setting up certificates
	I0731 11:15:03.267763  100669 provision.go:83] configureAuth start
	I0731 11:15:03.267819  100669 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-249026-m02
	I0731 11:15:03.283414  100669 provision.go:138] copyHostCerts
	I0731 11:15:03.283449  100669 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16968-8855/.minikube/ca.pem
	I0731 11:15:03.283482  100669 exec_runner.go:144] found /home/jenkins/minikube-integration/16968-8855/.minikube/ca.pem, removing ...
	I0731 11:15:03.283494  100669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16968-8855/.minikube/ca.pem
	I0731 11:15:03.283564  100669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16968-8855/.minikube/ca.pem (1082 bytes)
	I0731 11:15:03.283631  100669 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16968-8855/.minikube/cert.pem
	I0731 11:15:03.283648  100669 exec_runner.go:144] found /home/jenkins/minikube-integration/16968-8855/.minikube/cert.pem, removing ...
	I0731 11:15:03.283652  100669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16968-8855/.minikube/cert.pem
	I0731 11:15:03.283674  100669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16968-8855/.minikube/cert.pem (1123 bytes)
	I0731 11:15:03.283748  100669 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16968-8855/.minikube/key.pem
	I0731 11:15:03.283770  100669 exec_runner.go:144] found /home/jenkins/minikube-integration/16968-8855/.minikube/key.pem, removing ...
	I0731 11:15:03.283773  100669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16968-8855/.minikube/key.pem
	I0731 11:15:03.283794  100669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16968-8855/.minikube/key.pem (1675 bytes)
	I0731 11:15:03.283838  100669 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16968-8855/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca-key.pem org=jenkins.multinode-249026-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-249026-m02]
	I0731 11:15:03.419986  100669 provision.go:172] copyRemoteCerts
	I0731 11:15:03.420038  100669 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 11:15:03.420070  100669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-249026-m02
	I0731 11:15:03.435923  100669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/multinode-249026-m02/id_rsa Username:docker}
	I0731 11:15:03.528499  100669 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 11:15:03.528568  100669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 11:15:03.550079  100669 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 11:15:03.550153  100669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0731 11:15:03.571541  100669 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 11:15:03.571596  100669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 11:15:03.593186  100669 provision.go:86] duration metric: configureAuth took 325.410002ms
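
The configureAuth step above issues a server certificate whose SANs carry the node IP (192.168.58.3) plus the usual localhost/minikube names, signed by the shared minikube CA, then scps it into /etc/docker on the node. A hedged sketch of what that issuance looks like with crypto/x509; newServerCert, the serial scheme, and the key size are assumptions here, only the SAN list, org string, and 26280h expiry come from the log:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// newServerCert sketches "generating server cert ... san=[...]": issue a
// leaf certificate signed by the CA whose SANs include the node IPs and
// hostnames, so TLS to 192.168.58.3 validates.
func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey,
	ips []net.IP, dnsNames []string) ([]byte, *rsa.PrivateKey, error) {

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()), // illustrative serial
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-249026-m02"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,      // e.g. 192.168.58.3, 127.0.0.1
		DNSNames:     dnsNames, // e.g. localhost, minikube, multinode-249026-m02
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}
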
	I0731 11:15:03.593216  100669 ubuntu.go:193] setting minikube options for container-runtime
	I0731 11:15:03.593424  100669 config.go:182] Loaded profile config "multinode-249026": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0731 11:15:03.593532  100669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-249026-m02
	I0731 11:15:03.609378  100669 main.go:141] libmachine: Using SSH client type: native
	I0731 11:15:03.609800  100669 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eb00] 0x811ba0 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0731 11:15:03.609823  100669 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 11:15:03.819413  100669 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 11:15:03.819440  100669 machine.go:91] provisioned docker machine in 955.250829ms
	I0731 11:15:03.819449  100669 client.go:171] LocalClient.Create took 7.247370927s
	I0731 11:15:03.819469  100669 start.go:167] duration metric: libmachine.API.Create for "multinode-249026" took 7.247420369s
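
The `printf %!s(MISSING)` fragment in the SSH command a few lines up is not part of what actually ran on the node: it is Go's fmt placeholder for a verb with no matching argument, produced because minikube logs the command template without its interpolated value. A two-line demonstration:

package main

import "fmt"

func main() {
	// A %s verb with no argument renders as "%!s(MISSING)" -- exactly
	// the artifact in the logged command above.
	fmt.Printf("sudo mkdir -p /etc/sysconfig && printf %s | sudo tee /etc/sysconfig/crio.minikube\n")
}
// Output: sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) | sudo tee /etc/sysconfig/crio.minikube
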
	I0731 11:15:03.819479  100669 start.go:300] post-start starting for "multinode-249026-m02" (driver="docker")
	I0731 11:15:03.819488  100669 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 11:15:03.819539  100669 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 11:15:03.819576  100669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-249026-m02
	I0731 11:15:03.835801  100669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/multinode-249026-m02/id_rsa Username:docker}
	I0731 11:15:03.928700  100669 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 11:15:03.931654  100669 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.2 LTS"
	I0731 11:15:03.931673  100669 command_runner.go:130] > NAME="Ubuntu"
	I0731 11:15:03.931681  100669 command_runner.go:130] > VERSION_ID="22.04"
	I0731 11:15:03.931688  100669 command_runner.go:130] > VERSION="22.04.2 LTS (Jammy Jellyfish)"
	I0731 11:15:03.931695  100669 command_runner.go:130] > VERSION_CODENAME=jammy
	I0731 11:15:03.931701  100669 command_runner.go:130] > ID=ubuntu
	I0731 11:15:03.931707  100669 command_runner.go:130] > ID_LIKE=debian
	I0731 11:15:03.931714  100669 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0731 11:15:03.931722  100669 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0731 11:15:03.931735  100669 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0731 11:15:03.931752  100669 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0731 11:15:03.931760  100669 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0731 11:15:03.931819  100669 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0731 11:15:03.931854  100669 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0731 11:15:03.931874  100669 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0731 11:15:03.931903  100669 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0731 11:15:03.931914  100669 filesync.go:126] Scanning /home/jenkins/minikube-integration/16968-8855/.minikube/addons for local assets ...
	I0731 11:15:03.931972  100669 filesync.go:126] Scanning /home/jenkins/minikube-integration/16968-8855/.minikube/files for local assets ...
	I0731 11:15:03.932061  100669 filesync.go:149] local asset: /home/jenkins/minikube-integration/16968-8855/.minikube/files/etc/ssl/certs/156462.pem -> 156462.pem in /etc/ssl/certs
	I0731 11:15:03.932073  100669 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/files/etc/ssl/certs/156462.pem -> /etc/ssl/certs/156462.pem
	I0731 11:15:03.932171  100669 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 11:15:03.939634  100669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/files/etc/ssl/certs/156462.pem --> /etc/ssl/certs/156462.pem (1708 bytes)
	I0731 11:15:03.961143  100669 start.go:303] post-start completed in 141.651948ms
	I0731 11:15:03.961457  100669 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-249026-m02
	I0731 11:15:03.977699  100669 profile.go:148] Saving config to /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/multinode-249026/config.json ...
	I0731 11:15:03.977947  100669 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 11:15:03.977998  100669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-249026-m02
	I0731 11:15:03.994432  100669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/multinode-249026-m02/id_rsa Username:docker}
	I0731 11:15:04.080152  100669 command_runner.go:130] > 22%!
	(MISSING)I0731 11:15:04.080369  100669 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0731 11:15:04.084289  100669 command_runner.go:130] > 229G
	I0731 11:15:04.084438  100669 start.go:128] duration metric: createHost completed in 7.515589552s
	I0731 11:15:04.084461  100669 start.go:83] releasing machines lock for "multinode-249026-m02", held for 7.515736515s
	I0731 11:15:04.084529  100669 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-249026-m02
	I0731 11:15:04.103324  100669 out.go:177] * Found network options:
	I0731 11:15:04.105023  100669 out.go:177]   - NO_PROXY=192.168.58.2
	W0731 11:15:04.106776  100669 proxy.go:119] fail to check proxy env: Error ip not in block
	W0731 11:15:04.106816  100669 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 11:15:04.106874  100669 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 11:15:04.106908  100669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-249026-m02
	I0731 11:15:04.106999  100669 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 11:15:04.107063  100669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-249026-m02
	I0731 11:15:04.123748  100669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/multinode-249026-m02/id_rsa Username:docker}
	I0731 11:15:04.124816  100669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/multinode-249026-m02/id_rsa Username:docker}
	I0731 11:15:04.300860  100669 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0731 11:15:04.344561  100669 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0731 11:15:04.348548  100669 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0731 11:15:04.348578  100669 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0731 11:15:04.348592  100669 command_runner.go:130] > Device: b0h/176d	Inode: 552416      Links: 1
	I0731 11:15:04.348598  100669 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0731 11:15:04.348605  100669 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0731 11:15:04.348610  100669 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0731 11:15:04.348615  100669 command_runner.go:130] > Change: 2023-07-31 10:55:52.254677710 +0000
	I0731 11:15:04.348630  100669 command_runner.go:130] >  Birth: 2023-07-31 10:55:52.254677710 +0000
	I0731 11:15:04.348894  100669 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 11:15:04.366797  100669 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0731 11:15:04.366864  100669 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 11:15:04.392076  100669 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0731 11:15:04.392111  100669 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
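
The find/-exec mv passes above sideline the loopback and any bridge/podman CNI configs by appending a .mk_disabled suffix, so only minikube's chosen CNI stays active in /etc/cni/net.d. The same effect in plain Go, assuming direct filesystem access (minikube does it over SSH); disableCNIConfs is illustrative:

package main

import (
	"os"
	"path/filepath"
	"strings"
)

// disableCNIConfs approximates the find/-exec mv step above: rename every
// config matching the given globs (e.g. "*bridge*", "*podman*") that isn't
// already disabled, so the runtime ignores it.
func disableCNIConfs(dir string, patterns []string) ([]string, error) {
	var disabled []string
	for _, pat := range patterns {
		matches, err := filepath.Glob(filepath.Join(dir, pat))
		if err != nil {
			return disabled, err
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, m)
		}
	}
	return disabled, nil
}
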
	I0731 11:15:04.392120  100669 start.go:466] detecting cgroup driver to use...
	I0731 11:15:04.392146  100669 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0731 11:15:04.392182  100669 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 11:15:04.405164  100669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 11:15:04.414842  100669 docker.go:196] disabling cri-docker service (if available) ...
	I0731 11:15:04.414888  100669 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 11:15:04.426456  100669 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 11:15:04.438252  100669 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 11:15:04.508765  100669 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 11:15:04.585056  100669 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0731 11:15:04.585088  100669 docker.go:212] disabling docker service ...
	I0731 11:15:04.585125  100669 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 11:15:04.602770  100669 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 11:15:04.612944  100669 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 11:15:04.689844  100669 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0731 11:15:04.689922  100669 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 11:15:04.700283  100669 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0731 11:15:04.765733  100669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 11:15:04.776417  100669 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 11:15:04.790848  100669 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0731 11:15:04.790892  100669 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 11:15:04.790942  100669 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 11:15:04.799722  100669 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 11:15:04.799779  100669 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 11:15:04.808439  100669 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 11:15:04.817106  100669 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
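
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch cgroup_manager to cgroupfs, and re-add conmon_cgroup = "pod" after it. A sketch of the key-replacement pattern in Go, assuming local file access; setCrioKey is a hypothetical helper covering the first two edits, e.g. setCrioKey(path, "pause_image", "registry.k8s.io/pause:3.9") then setCrioKey(path, "cgroup_manager", "cgroupfs"):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setCrioKey mimics the sed edits above: replace any existing
// `key = ...` line in the crio drop-in config with a quoted new value.
func setCrioKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	updated := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
	return os.WriteFile(path, updated, 0o644)
}
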
	I0731 11:15:04.825740  100669 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 11:15:04.833786  100669 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 11:15:04.841357  100669 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0731 11:15:04.841425  100669 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 11:15:04.848630  100669 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 11:15:04.919448  100669 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 11:15:05.020542  100669 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 11:15:05.020610  100669 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 11:15:05.023745  100669 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0731 11:15:05.023769  100669 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0731 11:15:05.023779  100669 command_runner.go:130] > Device: b9h/185d	Inode: 186         Links: 1
	I0731 11:15:05.023789  100669 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0731 11:15:05.023796  100669 command_runner.go:130] > Access: 2023-07-31 11:15:05.010353631 +0000
	I0731 11:15:05.023807  100669 command_runner.go:130] > Modify: 2023-07-31 11:15:05.010353631 +0000
	I0731 11:15:05.023820  100669 command_runner.go:130] > Change: 2023-07-31 11:15:05.010353631 +0000
	I0731 11:15:05.023825  100669 command_runner.go:130] >  Birth: -
	I0731 11:15:05.023846  100669 start.go:534] Will wait 60s for crictl version
	I0731 11:15:05.023900  100669 ssh_runner.go:195] Run: which crictl
	I0731 11:15:05.026886  100669 command_runner.go:130] > /usr/bin/crictl
	I0731 11:15:05.026974  100669 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 11:15:05.056476  100669 command_runner.go:130] > Version:  0.1.0
	I0731 11:15:05.056498  100669 command_runner.go:130] > RuntimeName:  cri-o
	I0731 11:15:05.056503  100669 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0731 11:15:05.056508  100669 command_runner.go:130] > RuntimeApiVersion:  v1
	I0731 11:15:05.058502  100669 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0731 11:15:05.058561  100669 ssh_runner.go:195] Run: crio --version
	I0731 11:15:05.090081  100669 command_runner.go:130] > crio version 1.24.6
	I0731 11:15:05.090104  100669 command_runner.go:130] > Version:          1.24.6
	I0731 11:15:05.090115  100669 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0731 11:15:05.090123  100669 command_runner.go:130] > GitTreeState:     clean
	I0731 11:15:05.090136  100669 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0731 11:15:05.090149  100669 command_runner.go:130] > GoVersion:        go1.18.2
	I0731 11:15:05.090156  100669 command_runner.go:130] > Compiler:         gc
	I0731 11:15:05.090168  100669 command_runner.go:130] > Platform:         linux/amd64
	I0731 11:15:05.090182  100669 command_runner.go:130] > Linkmode:         dynamic
	I0731 11:15:05.090197  100669 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0731 11:15:05.090208  100669 command_runner.go:130] > SeccompEnabled:   true
	I0731 11:15:05.090223  100669 command_runner.go:130] > AppArmorEnabled:  false
	I0731 11:15:05.091626  100669 ssh_runner.go:195] Run: crio --version
	I0731 11:15:05.122434  100669 command_runner.go:130] > crio version 1.24.6
	I0731 11:15:05.122453  100669 command_runner.go:130] > Version:          1.24.6
	I0731 11:15:05.122460  100669 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0731 11:15:05.122464  100669 command_runner.go:130] > GitTreeState:     clean
	I0731 11:15:05.122470  100669 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0731 11:15:05.122475  100669 command_runner.go:130] > GoVersion:        go1.18.2
	I0731 11:15:05.122479  100669 command_runner.go:130] > Compiler:         gc
	I0731 11:15:05.122484  100669 command_runner.go:130] > Platform:         linux/amd64
	I0731 11:15:05.122488  100669 command_runner.go:130] > Linkmode:         dynamic
	I0731 11:15:05.122497  100669 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0731 11:15:05.122501  100669 command_runner.go:130] > SeccompEnabled:   true
	I0731 11:15:05.122505  100669 command_runner.go:130] > AppArmorEnabled:  false
	I0731 11:15:05.125557  100669 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.6 ...
	I0731 11:15:05.126994  100669 out.go:177]   - env NO_PROXY=192.168.58.2
	I0731 11:15:05.128576  100669 cli_runner.go:164] Run: docker network inspect multinode-249026 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 11:15:05.144569  100669 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0731 11:15:05.147820  100669 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 11:15:05.157684  100669 certs.go:56] Setting up /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/multinode-249026 for IP: 192.168.58.3
	I0731 11:15:05.157710  100669 certs.go:190] acquiring lock for shared ca certs: {Name:mkc3a3f248dbae88fa439f539f826d6e08b37eb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:15:05.157841  100669 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16968-8855/.minikube/ca.key
	I0731 11:15:05.157879  100669 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16968-8855/.minikube/proxy-client-ca.key
	I0731 11:15:05.157891  100669 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 11:15:05.157905  100669 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 11:15:05.157917  100669 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 11:15:05.157932  100669 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 11:15:05.157973  100669 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/home/jenkins/minikube-integration/16968-8855/.minikube/certs/15646.pem (1338 bytes)
	W0731 11:15:05.157999  100669 certs.go:433] ignoring /home/jenkins/minikube-integration/16968-8855/.minikube/certs/home/jenkins/minikube-integration/16968-8855/.minikube/certs/15646_empty.pem, impossibly tiny 0 bytes
	I0731 11:15:05.158011  100669 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 11:15:05.158034  100669 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca.pem (1082 bytes)
	I0731 11:15:05.158066  100669 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/home/jenkins/minikube-integration/16968-8855/.minikube/certs/cert.pem (1123 bytes)
	I0731 11:15:05.158088  100669 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/home/jenkins/minikube-integration/16968-8855/.minikube/certs/key.pem (1675 bytes)
	I0731 11:15:05.158126  100669 certs.go:437] found cert: /home/jenkins/minikube-integration/16968-8855/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16968-8855/.minikube/files/etc/ssl/certs/156462.pem (1708 bytes)
	I0731 11:15:05.158154  100669 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/15646.pem -> /usr/share/ca-certificates/15646.pem
	I0731 11:15:05.158174  100669 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/files/etc/ssl/certs/156462.pem -> /usr/share/ca-certificates/156462.pem
	I0731 11:15:05.158186  100669 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16968-8855/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 11:15:05.158473  100669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 11:15:05.179609  100669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0731 11:15:05.200527  100669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 11:15:05.221165  100669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 11:15:05.241406  100669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/certs/15646.pem --> /usr/share/ca-certificates/15646.pem (1338 bytes)
	I0731 11:15:05.261461  100669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/files/etc/ssl/certs/156462.pem --> /usr/share/ca-certificates/156462.pem (1708 bytes)
	I0731 11:15:05.281164  100669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 11:15:05.301937  100669 ssh_runner.go:195] Run: openssl version
	I0731 11:15:05.306810  100669 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0731 11:15:05.306897  100669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/156462.pem && ln -fs /usr/share/ca-certificates/156462.pem /etc/ssl/certs/156462.pem"
	I0731 11:15:05.315341  100669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/156462.pem
	I0731 11:15:05.318537  100669 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 31 11:01 /usr/share/ca-certificates/156462.pem
	I0731 11:15:05.318567  100669 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 31 11:01 /usr/share/ca-certificates/156462.pem
	I0731 11:15:05.318611  100669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/156462.pem
	I0731 11:15:05.324715  100669 command_runner.go:130] > 3ec20f2e
	I0731 11:15:05.324783  100669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/156462.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 11:15:05.332976  100669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 11:15:05.341219  100669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 11:15:05.344306  100669 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 31 10:56 /usr/share/ca-certificates/minikubeCA.pem
	I0731 11:15:05.344353  100669 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 31 10:56 /usr/share/ca-certificates/minikubeCA.pem
	I0731 11:15:05.344398  100669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 11:15:05.350263  100669 command_runner.go:130] > b5213941
	I0731 11:15:05.350489  100669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 11:15:05.359052  100669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15646.pem && ln -fs /usr/share/ca-certificates/15646.pem /etc/ssl/certs/15646.pem"
	I0731 11:15:05.367208  100669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15646.pem
	I0731 11:15:05.370176  100669 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 31 11:01 /usr/share/ca-certificates/15646.pem
	I0731 11:15:05.370197  100669 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 31 11:01 /usr/share/ca-certificates/15646.pem
	I0731 11:15:05.370245  100669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15646.pem
	I0731 11:15:05.376139  100669 command_runner.go:130] > 51391683
	I0731 11:15:05.376325  100669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15646.pem /etc/ssl/certs/51391683.0"
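
Each certificate above is installed the way OpenSSL expects to find CAs: compute its subject hash (`openssl x509 -hash -noout`) and symlink <hash>.0 in /etc/ssl/certs back to the PEM, which is what the `test -L ... || ln -fs` lines do. The pattern in Go, assuming the openssl binary is available; linkCACert is illustrative:

package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert reproduces the pattern above: hash a PEM cert's subject and
// create the <hash>.0 symlink the system trust store looks up.
func linkCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// ln -fs semantics: drop any stale link, then create the new one.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}
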
	I0731 11:15:05.384533  100669 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0731 11:15:05.387384  100669 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0731 11:15:05.387413  100669 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0731 11:15:05.387479  100669 ssh_runner.go:195] Run: crio config
	I0731 11:15:05.420650  100669 command_runner.go:130] ! time="2023-07-31 11:15:05.420272562Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0731 11:15:05.420687  100669 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0731 11:15:05.425320  100669 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0731 11:15:05.425345  100669 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0731 11:15:05.425355  100669 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0731 11:15:05.425359  100669 command_runner.go:130] > #
	I0731 11:15:05.425370  100669 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0731 11:15:05.425380  100669 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0731 11:15:05.425395  100669 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0731 11:15:05.425412  100669 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0731 11:15:05.425419  100669 command_runner.go:130] > # reload'.
	I0731 11:15:05.425434  100669 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0731 11:15:05.425448  100669 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0731 11:15:05.425462  100669 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0731 11:15:05.425476  100669 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0731 11:15:05.425485  100669 command_runner.go:130] > [crio]
	I0731 11:15:05.425497  100669 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0731 11:15:05.425508  100669 command_runner.go:130] > # containers images, in this directory.
	I0731 11:15:05.425523  100669 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0731 11:15:05.425537  100669 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0731 11:15:05.425549  100669 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0731 11:15:05.425563  100669 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0731 11:15:05.425575  100669 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0731 11:15:05.425586  100669 command_runner.go:130] > # storage_driver = "vfs"
	I0731 11:15:05.425598  100669 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0731 11:15:05.425611  100669 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0731 11:15:05.425622  100669 command_runner.go:130] > # storage_option = [
	I0731 11:15:05.425631  100669 command_runner.go:130] > # ]
	I0731 11:15:05.425646  100669 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0731 11:15:05.425659  100669 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0731 11:15:05.425667  100669 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0731 11:15:05.425680  100669 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0731 11:15:05.425694  100669 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0731 11:15:05.425705  100669 command_runner.go:130] > # always happen on a node reboot
	I0731 11:15:05.425714  100669 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0731 11:15:05.425730  100669 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0731 11:15:05.425743  100669 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0731 11:15:05.425757  100669 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0731 11:15:05.425770  100669 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0731 11:15:05.425787  100669 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0731 11:15:05.425805  100669 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0731 11:15:05.425815  100669 command_runner.go:130] > # internal_wipe = true
	I0731 11:15:05.425827  100669 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0731 11:15:05.425838  100669 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0731 11:15:05.425852  100669 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0731 11:15:05.425864  100669 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0731 11:15:05.425878  100669 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0731 11:15:05.425888  100669 command_runner.go:130] > [crio.api]
	I0731 11:15:05.425899  100669 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0731 11:15:05.425909  100669 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0731 11:15:05.425919  100669 command_runner.go:130] > # IP address on which the stream server will listen.
	I0731 11:15:05.425929  100669 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0731 11:15:05.425942  100669 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0731 11:15:05.425954  100669 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0731 11:15:05.425964  100669 command_runner.go:130] > # stream_port = "0"
	I0731 11:15:05.425977  100669 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0731 11:15:05.425987  100669 command_runner.go:130] > # stream_enable_tls = false
	I0731 11:15:05.426000  100669 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0731 11:15:05.426010  100669 command_runner.go:130] > # stream_idle_timeout = ""
	I0731 11:15:05.426024  100669 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0731 11:15:05.426038  100669 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0731 11:15:05.426048  100669 command_runner.go:130] > # minutes.
	I0731 11:15:05.426057  100669 command_runner.go:130] > # stream_tls_cert = ""
	I0731 11:15:05.426070  100669 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0731 11:15:05.426089  100669 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0731 11:15:05.426100  100669 command_runner.go:130] > # stream_tls_key = ""
	I0731 11:15:05.426114  100669 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0731 11:15:05.426128  100669 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0731 11:15:05.426143  100669 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0731 11:15:05.426153  100669 command_runner.go:130] > # stream_tls_ca = ""
	I0731 11:15:05.426169  100669 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0731 11:15:05.426178  100669 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0731 11:15:05.426194  100669 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0731 11:15:05.426205  100669 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0731 11:15:05.426232  100669 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0731 11:15:05.426246  100669 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0731 11:15:05.426252  100669 command_runner.go:130] > [crio.runtime]
	I0731 11:15:05.426263  100669 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0731 11:15:05.426276  100669 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0731 11:15:05.426287  100669 command_runner.go:130] > # "nofile=1024:2048"
	I0731 11:15:05.426301  100669 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0731 11:15:05.426311  100669 command_runner.go:130] > # default_ulimits = [
	I0731 11:15:05.426319  100669 command_runner.go:130] > # ]
	I0731 11:15:05.426331  100669 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0731 11:15:05.426340  100669 command_runner.go:130] > # no_pivot = false
	I0731 11:15:05.426350  100669 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0731 11:15:05.426364  100669 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0731 11:15:05.426376  100669 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0731 11:15:05.426390  100669 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0731 11:15:05.426402  100669 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0731 11:15:05.426417  100669 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0731 11:15:05.426426  100669 command_runner.go:130] > # conmon = ""
	I0731 11:15:05.426434  100669 command_runner.go:130] > # Cgroup setting for conmon
	I0731 11:15:05.426449  100669 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0731 11:15:05.426459  100669 command_runner.go:130] > conmon_cgroup = "pod"
	I0731 11:15:05.426474  100669 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0731 11:15:05.426486  100669 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0731 11:15:05.426501  100669 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0731 11:15:05.426510  100669 command_runner.go:130] > # conmon_env = [
	I0731 11:15:05.426516  100669 command_runner.go:130] > # ]
	I0731 11:15:05.426526  100669 command_runner.go:130] > # Additional environment variables to set for all the
	I0731 11:15:05.426538  100669 command_runner.go:130] > # containers. These are overridden if set in the
	I0731 11:15:05.426551  100669 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0731 11:15:05.426561  100669 command_runner.go:130] > # default_env = [
	I0731 11:15:05.426570  100669 command_runner.go:130] > # ]
	I0731 11:15:05.426580  100669 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0731 11:15:05.426591  100669 command_runner.go:130] > # selinux = false
	I0731 11:15:05.426604  100669 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0731 11:15:05.426618  100669 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0731 11:15:05.426632  100669 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0731 11:15:05.426643  100669 command_runner.go:130] > # seccomp_profile = ""
	I0731 11:15:05.426654  100669 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0731 11:15:05.426668  100669 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0731 11:15:05.426682  100669 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0731 11:15:05.426691  100669 command_runner.go:130] > # which might increase security.
	I0731 11:15:05.426700  100669 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0731 11:15:05.426715  100669 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0731 11:15:05.426729  100669 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0731 11:15:05.426743  100669 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0731 11:15:05.426757  100669 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0731 11:15:05.426769  100669 command_runner.go:130] > # This option supports live configuration reload.
	I0731 11:15:05.426779  100669 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0731 11:15:05.426790  100669 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0731 11:15:05.426801  100669 command_runner.go:130] > # the cgroup blockio controller.
	I0731 11:15:05.426812  100669 command_runner.go:130] > # blockio_config_file = ""
	I0731 11:15:05.426824  100669 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0731 11:15:05.426834  100669 command_runner.go:130] > # irqbalance daemon.
	I0731 11:15:05.426845  100669 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0731 11:15:05.426859  100669 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0731 11:15:05.426870  100669 command_runner.go:130] > # This option supports live configuration reload.
	I0731 11:15:05.426878  100669 command_runner.go:130] > # rdt_config_file = ""
	I0731 11:15:05.426890  100669 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0731 11:15:05.426901  100669 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0731 11:15:05.426915  100669 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0731 11:15:05.426925  100669 command_runner.go:130] > # separate_pull_cgroup = ""
	I0731 11:15:05.426940  100669 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0731 11:15:05.426953  100669 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0731 11:15:05.426963  100669 command_runner.go:130] > # will be added.
	I0731 11:15:05.426974  100669 command_runner.go:130] > # default_capabilities = [
	I0731 11:15:05.426983  100669 command_runner.go:130] > # 	"CHOWN",
	I0731 11:15:05.426991  100669 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0731 11:15:05.427000  100669 command_runner.go:130] > # 	"FSETID",
	I0731 11:15:05.427009  100669 command_runner.go:130] > # 	"FOWNER",
	I0731 11:15:05.427018  100669 command_runner.go:130] > # 	"SETGID",
	I0731 11:15:05.427027  100669 command_runner.go:130] > # 	"SETUID",
	I0731 11:15:05.427034  100669 command_runner.go:130] > # 	"SETPCAP",
	I0731 11:15:05.427044  100669 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0731 11:15:05.427053  100669 command_runner.go:130] > # 	"KILL",
	I0731 11:15:05.427062  100669 command_runner.go:130] > # ]
	I0731 11:15:05.427083  100669 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0731 11:15:05.427097  100669 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0731 11:15:05.427109  100669 command_runner.go:130] > # add_inheritable_capabilities = true
	I0731 11:15:05.427123  100669 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0731 11:15:05.427135  100669 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0731 11:15:05.427145  100669 command_runner.go:130] > # default_sysctls = [
	I0731 11:15:05.427154  100669 command_runner.go:130] > # ]
	I0731 11:15:05.427163  100669 command_runner.go:130] > # List of devices on the host that a
	I0731 11:15:05.427177  100669 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0731 11:15:05.427188  100669 command_runner.go:130] > # allowed_devices = [
	I0731 11:15:05.427197  100669 command_runner.go:130] > # 	"/dev/fuse",
	I0731 11:15:05.427204  100669 command_runner.go:130] > # ]
	I0731 11:15:05.427213  100669 command_runner.go:130] > # List of additional devices. specified as
	I0731 11:15:05.427243  100669 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0731 11:15:05.427256  100669 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0731 11:15:05.427270  100669 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0731 11:15:05.427280  100669 command_runner.go:130] > # additional_devices = [
	I0731 11:15:05.427287  100669 command_runner.go:130] > # ]
	I0731 11:15:05.427299  100669 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0731 11:15:05.427310  100669 command_runner.go:130] > # cdi_spec_dirs = [
	I0731 11:15:05.427319  100669 command_runner.go:130] > # 	"/etc/cdi",
	I0731 11:15:05.427327  100669 command_runner.go:130] > # 	"/var/run/cdi",
	I0731 11:15:05.427335  100669 command_runner.go:130] > # ]
	I0731 11:15:05.427347  100669 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0731 11:15:05.427361  100669 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0731 11:15:05.427371  100669 command_runner.go:130] > # Defaults to false.
	I0731 11:15:05.427380  100669 command_runner.go:130] > # device_ownership_from_security_context = false
	I0731 11:15:05.427394  100669 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0731 11:15:05.427408  100669 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0731 11:15:05.427417  100669 command_runner.go:130] > # hooks_dir = [
	I0731 11:15:05.427429  100669 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0731 11:15:05.427438  100669 command_runner.go:130] > # ]
	I0731 11:15:05.427449  100669 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0731 11:15:05.427463  100669 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0731 11:15:05.427473  100669 command_runner.go:130] > # its default mounts from the following two files:
	I0731 11:15:05.427481  100669 command_runner.go:130] > #
	I0731 11:15:05.427493  100669 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0731 11:15:05.427507  100669 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0731 11:15:05.427520  100669 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0731 11:15:05.427528  100669 command_runner.go:130] > #
	I0731 11:15:05.427539  100669 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0731 11:15:05.427553  100669 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0731 11:15:05.427568  100669 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0731 11:15:05.427579  100669 command_runner.go:130] > #      only add mounts it finds in this file.
	I0731 11:15:05.427585  100669 command_runner.go:130] > #
	I0731 11:15:05.427595  100669 command_runner.go:130] > # default_mounts_file = ""
	I0731 11:15:05.427607  100669 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0731 11:15:05.427623  100669 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0731 11:15:05.427633  100669 command_runner.go:130] > # pids_limit = 0
	I0731 11:15:05.427647  100669 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0731 11:15:05.427661  100669 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0731 11:15:05.427675  100669 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0731 11:15:05.427690  100669 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0731 11:15:05.427697  100669 command_runner.go:130] > # log_size_max = -1
	I0731 11:15:05.427707  100669 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0731 11:15:05.427717  100669 command_runner.go:130] > # log_to_journald = false
	I0731 11:15:05.427728  100669 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0731 11:15:05.427738  100669 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0731 11:15:05.427747  100669 command_runner.go:130] > # Path to directory for container attach sockets.
	I0731 11:15:05.427757  100669 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0731 11:15:05.427767  100669 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0731 11:15:05.427775  100669 command_runner.go:130] > # bind_mount_prefix = ""
	I0731 11:15:05.427783  100669 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0731 11:15:05.427792  100669 command_runner.go:130] > # read_only = false
	I0731 11:15:05.427803  100669 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0731 11:15:05.427815  100669 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0731 11:15:05.427826  100669 command_runner.go:130] > # live configuration reload.
	I0731 11:15:05.427832  100669 command_runner.go:130] > # log_level = "info"
	I0731 11:15:05.427839  100669 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0731 11:15:05.427849  100669 command_runner.go:130] > # This option supports live configuration reload.
	I0731 11:15:05.427860  100669 command_runner.go:130] > # log_filter = ""
	I0731 11:15:05.427871  100669 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0731 11:15:05.427903  100669 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0731 11:15:05.427916  100669 command_runner.go:130] > # separated by comma.
	I0731 11:15:05.427928  100669 command_runner.go:130] > # uid_mappings = ""
	I0731 11:15:05.427939  100669 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0731 11:15:05.427945  100669 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0731 11:15:05.427951  100669 command_runner.go:130] > # separated by comma.
	I0731 11:15:05.427955  100669 command_runner.go:130] > # gid_mappings = ""
	I0731 11:15:05.427964  100669 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0731 11:15:05.427973  100669 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0731 11:15:05.427986  100669 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0731 11:15:05.427997  100669 command_runner.go:130] > # minimum_mappable_uid = -1
	I0731 11:15:05.428008  100669 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0731 11:15:05.428021  100669 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0731 11:15:05.428033  100669 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0731 11:15:05.428043  100669 command_runner.go:130] > # minimum_mappable_gid = -1
	I0731 11:15:05.428050  100669 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0731 11:15:05.428058  100669 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0731 11:15:05.428068  100669 command_runner.go:130] > # value is 30s; lower values are ignored by CRI-O.
	I0731 11:15:05.428084  100669 command_runner.go:130] > # ctr_stop_timeout = 30
	I0731 11:15:05.428098  100669 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0731 11:15:05.428116  100669 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0731 11:15:05.428128  100669 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0731 11:15:05.428140  100669 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0731 11:15:05.428147  100669 command_runner.go:130] > # drop_infra_ctr = true
	I0731 11:15:05.428154  100669 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0731 11:15:05.428161  100669 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0731 11:15:05.428169  100669 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0731 11:15:05.428175  100669 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0731 11:15:05.428183  100669 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0731 11:15:05.428190  100669 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0731 11:15:05.428195  100669 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0731 11:15:05.428204  100669 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0731 11:15:05.428211  100669 command_runner.go:130] > # pinns_path = ""
	I0731 11:15:05.428217  100669 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0731 11:15:05.428226  100669 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0731 11:15:05.428234  100669 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0731 11:15:05.428241  100669 command_runner.go:130] > # default_runtime = "runc"
	I0731 11:15:05.428246  100669 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0731 11:15:05.428255  100669 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating them as directories).
	I0731 11:15:05.428267  100669 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0731 11:15:05.428273  100669 command_runner.go:130] > # creation as a file is not desired either.
	I0731 11:15:05.428281  100669 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0731 11:15:05.428298  100669 command_runner.go:130] > # the hostname is being managed dynamically.
	I0731 11:15:05.428305  100669 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0731 11:15:05.428309  100669 command_runner.go:130] > # ]
	I0731 11:15:05.428317  100669 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0731 11:15:05.428325  100669 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0731 11:15:05.428334  100669 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0731 11:15:05.428341  100669 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0731 11:15:05.428347  100669 command_runner.go:130] > #
	I0731 11:15:05.428351  100669 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0731 11:15:05.428356  100669 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0731 11:15:05.428363  100669 command_runner.go:130] > #  runtime_type = "oci"
	I0731 11:15:05.428367  100669 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0731 11:15:05.428374  100669 command_runner.go:130] > #  privileged_without_host_devices = false
	I0731 11:15:05.428379  100669 command_runner.go:130] > #  allowed_annotations = []
	I0731 11:15:05.428384  100669 command_runner.go:130] > # Where:
	I0731 11:15:05.428389  100669 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0731 11:15:05.428397  100669 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0731 11:15:05.428406  100669 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0731 11:15:05.428414  100669 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0731 11:15:05.428421  100669 command_runner.go:130] > #   in $PATH.
	I0731 11:15:05.428427  100669 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0731 11:15:05.428433  100669 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0731 11:15:05.428441  100669 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0731 11:15:05.428448  100669 command_runner.go:130] > #   state.
	I0731 11:15:05.428454  100669 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0731 11:15:05.428462  100669 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0731 11:15:05.428468  100669 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0731 11:15:05.428476  100669 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0731 11:15:05.428485  100669 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0731 11:15:05.428493  100669 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0731 11:15:05.428500  100669 command_runner.go:130] > #   The currently recognized values are:
	I0731 11:15:05.428506  100669 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0731 11:15:05.428515  100669 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0731 11:15:05.428527  100669 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0731 11:15:05.428541  100669 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0731 11:15:05.428553  100669 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0731 11:15:05.428562  100669 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0731 11:15:05.428570  100669 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0731 11:15:05.428581  100669 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0731 11:15:05.428590  100669 command_runner.go:130] > #   should be moved to the container's cgroup
	I0731 11:15:05.428596  100669 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0731 11:15:05.428601  100669 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0731 11:15:05.428609  100669 command_runner.go:130] > runtime_type = "oci"
	I0731 11:15:05.428617  100669 command_runner.go:130] > runtime_root = "/run/runc"
	I0731 11:15:05.428628  100669 command_runner.go:130] > runtime_config_path = ""
	I0731 11:15:05.428638  100669 command_runner.go:130] > monitor_path = ""
	I0731 11:15:05.428647  100669 command_runner.go:130] > monitor_cgroup = ""
	I0731 11:15:05.428653  100669 command_runner.go:130] > monitor_exec_cgroup = ""
	I0731 11:15:05.428681  100669 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0731 11:15:05.428689  100669 command_runner.go:130] > # running containers
	I0731 11:15:05.428693  100669 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0731 11:15:05.428705  100669 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0731 11:15:05.428716  100669 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0731 11:15:05.428730  100669 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0731 11:15:05.428741  100669 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0731 11:15:05.428749  100669 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0731 11:15:05.428754  100669 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0731 11:15:05.428760  100669 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0731 11:15:05.428765  100669 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0731 11:15:05.428772  100669 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0731 11:15:05.428778  100669 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0731 11:15:05.428789  100669 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0731 11:15:05.428801  100669 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0731 11:15:05.428817  100669 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0731 11:15:05.428834  100669 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0731 11:15:05.428847  100669 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0731 11:15:05.428859  100669 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0731 11:15:05.428868  100669 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0731 11:15:05.428882  100669 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0731 11:15:05.428898  100669 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0731 11:15:05.428907  100669 command_runner.go:130] > # Example:
	I0731 11:15:05.428917  100669 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0731 11:15:05.428928  100669 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0731 11:15:05.428939  100669 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0731 11:15:05.428949  100669 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0731 11:15:05.428953  100669 command_runner.go:130] > # cpuset = 0
	I0731 11:15:05.428962  100669 command_runner.go:130] > # cpushares = "0-1"
	I0731 11:15:05.428972  100669 command_runner.go:130] > # Where:
	I0731 11:15:05.428983  100669 command_runner.go:130] > # The workload name is workload-type.
	I0731 11:15:05.428997  100669 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0731 11:15:05.429009  100669 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0731 11:15:05.429021  100669 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0731 11:15:05.429035  100669 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0731 11:15:05.429045  100669 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0731 11:15:05.429053  100669 command_runner.go:130] > # 
	I0731 11:15:05.429069  100669 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0731 11:15:05.429082  100669 command_runner.go:130] > #
	I0731 11:15:05.429094  100669 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0731 11:15:05.429108  100669 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0731 11:15:05.429120  100669 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0731 11:15:05.429126  100669 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0731 11:15:05.429138  100669 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0731 11:15:05.429148  100669 command_runner.go:130] > [crio.image]
	I0731 11:15:05.429158  100669 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0731 11:15:05.429169  100669 command_runner.go:130] > # default_transport = "docker://"
	I0731 11:15:05.429182  100669 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0731 11:15:05.429195  100669 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0731 11:15:05.429205  100669 command_runner.go:130] > # global_auth_file = ""
	I0731 11:15:05.429210  100669 command_runner.go:130] > # The image used to instantiate infra containers.
	I0731 11:15:05.429218  100669 command_runner.go:130] > # This option supports live configuration reload.
	I0731 11:15:05.429223  100669 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0731 11:15:05.429232  100669 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0731 11:15:05.429239  100669 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0731 11:15:05.429246  100669 command_runner.go:130] > # This option supports live configuration reload.
	I0731 11:15:05.429251  100669 command_runner.go:130] > # pause_image_auth_file = ""
	I0731 11:15:05.429259  100669 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0731 11:15:05.429267  100669 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0731 11:15:05.429276  100669 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0731 11:15:05.429283  100669 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0731 11:15:05.429290  100669 command_runner.go:130] > # pause_command = "/pause"
	I0731 11:15:05.429296  100669 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0731 11:15:05.429304  100669 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0731 11:15:05.429312  100669 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0731 11:15:05.429320  100669 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0731 11:15:05.429327  100669 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0731 11:15:05.429334  100669 command_runner.go:130] > # signature_policy = ""
	I0731 11:15:05.429344  100669 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0731 11:15:05.429353  100669 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0731 11:15:05.429359  100669 command_runner.go:130] > # changing them here.
	I0731 11:15:05.429363  100669 command_runner.go:130] > # insecure_registries = [
	I0731 11:15:05.429368  100669 command_runner.go:130] > # ]
	I0731 11:15:05.429375  100669 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0731 11:15:05.429382  100669 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0731 11:15:05.429388  100669 command_runner.go:130] > # image_volumes = "mkdir"
	I0731 11:15:05.429393  100669 command_runner.go:130] > # Temporary directory to use for storing big files
	I0731 11:15:05.429400  100669 command_runner.go:130] > # big_files_temporary_dir = ""
	I0731 11:15:05.429406  100669 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0731 11:15:05.429412  100669 command_runner.go:130] > # CNI plugins.
	I0731 11:15:05.429416  100669 command_runner.go:130] > [crio.network]
	I0731 11:15:05.429425  100669 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0731 11:15:05.429432  100669 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0731 11:15:05.429439  100669 command_runner.go:130] > # cni_default_network = ""
	I0731 11:15:05.429444  100669 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0731 11:15:05.429451  100669 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0731 11:15:05.429456  100669 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0731 11:15:05.429462  100669 command_runner.go:130] > # plugin_dirs = [
	I0731 11:15:05.429467  100669 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0731 11:15:05.429472  100669 command_runner.go:130] > # ]
	I0731 11:15:05.429478  100669 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I0731 11:15:05.429484  100669 command_runner.go:130] > [crio.metrics]
	I0731 11:15:05.429489  100669 command_runner.go:130] > # Globally enable or disable metrics support.
	I0731 11:15:05.429495  100669 command_runner.go:130] > # enable_metrics = false
	I0731 11:15:05.429500  100669 command_runner.go:130] > # Specify enabled metrics collectors.
	I0731 11:15:05.429507  100669 command_runner.go:130] > # Per default all metrics are enabled.
	I0731 11:15:05.429513  100669 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0731 11:15:05.429521  100669 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0731 11:15:05.429529  100669 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0731 11:15:05.429535  100669 command_runner.go:130] > # metrics_collectors = [
	I0731 11:15:05.429539  100669 command_runner.go:130] > # 	"operations",
	I0731 11:15:05.429546  100669 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0731 11:15:05.429551  100669 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0731 11:15:05.429557  100669 command_runner.go:130] > # 	"operations_errors",
	I0731 11:15:05.429561  100669 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0731 11:15:05.429567  100669 command_runner.go:130] > # 	"image_pulls_by_name",
	I0731 11:15:05.429571  100669 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0731 11:15:05.429579  100669 command_runner.go:130] > # 	"image_pulls_failures",
	I0731 11:15:05.429586  100669 command_runner.go:130] > # 	"image_pulls_successes",
	I0731 11:15:05.429590  100669 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0731 11:15:05.429596  100669 command_runner.go:130] > # 	"image_layer_reuse",
	I0731 11:15:05.429600  100669 command_runner.go:130] > # 	"containers_oom_total",
	I0731 11:15:05.429606  100669 command_runner.go:130] > # 	"containers_oom",
	I0731 11:15:05.429610  100669 command_runner.go:130] > # 	"processes_defunct",
	I0731 11:15:05.429617  100669 command_runner.go:130] > # 	"operations_total",
	I0731 11:15:05.429621  100669 command_runner.go:130] > # 	"operations_latency_seconds",
	I0731 11:15:05.429628  100669 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0731 11:15:05.429632  100669 command_runner.go:130] > # 	"operations_errors_total",
	I0731 11:15:05.429638  100669 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0731 11:15:05.429643  100669 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0731 11:15:05.429649  100669 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0731 11:15:05.429654  100669 command_runner.go:130] > # 	"image_pulls_success_total",
	I0731 11:15:05.429660  100669 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0731 11:15:05.429664  100669 command_runner.go:130] > # 	"containers_oom_count_total",
	I0731 11:15:05.429670  100669 command_runner.go:130] > # ]
	I0731 11:15:05.429676  100669 command_runner.go:130] > # The port on which the metrics server will listen.
	I0731 11:15:05.429681  100669 command_runner.go:130] > # metrics_port = 9090
	I0731 11:15:05.429687  100669 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0731 11:15:05.429692  100669 command_runner.go:130] > # metrics_socket = ""
	I0731 11:15:05.429697  100669 command_runner.go:130] > # The certificate for the secure metrics server.
	I0731 11:15:05.429706  100669 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0731 11:15:05.429712  100669 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0731 11:15:05.429719  100669 command_runner.go:130] > # certificate on any modification event.
	I0731 11:15:05.429723  100669 command_runner.go:130] > # metrics_cert = ""
	I0731 11:15:05.429730  100669 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0731 11:15:05.429737  100669 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0731 11:15:05.429741  100669 command_runner.go:130] > # metrics_key = ""
	I0731 11:15:05.429749  100669 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0731 11:15:05.429756  100669 command_runner.go:130] > [crio.tracing]
	I0731 11:15:05.429761  100669 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0731 11:15:05.429767  100669 command_runner.go:130] > # enable_tracing = false
	I0731 11:15:05.429772  100669 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0731 11:15:05.429779  100669 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0731 11:15:05.429784  100669 command_runner.go:130] > # Number of samples to collect per million spans.
	I0731 11:15:05.429791  100669 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0731 11:15:05.429796  100669 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0731 11:15:05.429802  100669 command_runner.go:130] > [crio.stats]
	I0731 11:15:05.429808  100669 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0731 11:15:05.429815  100669 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0731 11:15:05.429822  100669 command_runner.go:130] > # stats_collection_period = 0
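
The dump above is CRI-O's TOML configuration (crio.conf) echoed line by line. As a rough illustration of its shape, here is a minimal Go sketch that decodes a few of the keys shown, assuming the github.com/BurntSushi/toml package; the struct and field names are illustrative, not CRI-O's own loader or types.

    package main

    import (
        "fmt"
        "log"

        "github.com/BurntSushi/toml"
    )

    // crioConfig models only a handful of the keys shown in the dump above.
    type crioConfig struct {
        Crio struct {
            Runtime struct {
                PidsLimit   int64  `toml:"pids_limit"`
                LogLevel    string `toml:"log_level"`
                UIDMappings string `toml:"uid_mappings"`
            } `toml:"runtime"`
            Image struct {
                PauseImage string `toml:"pause_image"`
            } `toml:"image"`
        } `toml:"crio"`
    }

    // snippet is a trimmed excerpt in the same format as the dump above.
    const snippet = `
    [crio.runtime]
    pids_limit = 0
    log_level = "info"
    uid_mappings = ""

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"
    `

    func main() {
        var cfg crioConfig
        if _, err := toml.Decode(snippet, &cfg); err != nil {
            log.Fatal(err)
        }
        fmt.Println("pause image:", cfg.Crio.Image.PauseImage)
    }
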
	I0731 11:15:05.429876  100669 cni.go:84] Creating CNI manager for ""
	I0731 11:15:05.429884  100669 cni.go:136] 2 nodes found, recommending kindnet
	I0731 11:15:05.429892  100669 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0731 11:15:05.429911  100669 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-249026 NodeName:multinode-249026-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 11:15:05.430016  100669 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-249026-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
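
minikube renders the kubeadm, kubelet, and kube-proxy documents above from Go templates, substituting per-node values such as the node name and IP. The following is a minimal sketch of that idea using the standard library's text/template; the template body is trimmed from the full config above and the value-struct fields are hypothetical, not minikube's real types.

    package main

    import (
        "log"
        "os"
        "text/template"
    )

    // nodeValues is a hypothetical subset of what minikube substitutes;
    // the real template carries many more fields (CIDRs, extra args, ...).
    type nodeValues struct {
        NodeName string
        NodeIP   string
    }

    const initConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: 8443
    nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(initConfigTmpl))
        v := nodeValues{NodeName: "multinode-249026-m02", NodeIP: "192.168.58.3"}
        if err := t.Execute(os.Stdout, v); err != nil {
            log.Fatal(err)
        }
    }
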
	
	I0731 11:15:05.430062  100669 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-249026-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:multinode-249026 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0731 11:15:05.430126  100669 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0731 11:15:05.437405  100669 command_runner.go:130] > kubeadm
	I0731 11:15:05.437430  100669 command_runner.go:130] > kubectl
	I0731 11:15:05.437436  100669 command_runner.go:130] > kubelet
	I0731 11:15:05.438061  100669 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 11:15:05.438122  100669 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0731 11:15:05.445571  100669 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0731 11:15:05.460974  100669 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 11:15:05.476262  100669 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0731 11:15:05.479230  100669 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 11:15:05.488744  100669 host.go:66] Checking if "multinode-249026" exists ...
	I0731 11:15:05.488947  100669 start.go:301] JoinCluster: &{Name:multinode-249026 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-249026 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 11:15:05.489022  100669 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0731 11:15:05.489057  100669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-249026
	I0731 11:15:05.489000  100669 config.go:182] Loaded profile config "multinode-249026": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0731 11:15:05.505565  100669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/multinode-249026/id_rsa Username:docker}
	I0731 11:15:05.645020  100669 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 4r0n40.oq7amwr9erkn9mcg --discovery-token-ca-cert-hash sha256:293b68dd99d5c75256004a8ddc8637ea08a1940f52c1b0e6476e24cc10aea3dd 
	I0731 11:15:05.645082  100669 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0731 11:15:05.645121  100669 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4r0n40.oq7amwr9erkn9mcg --discovery-token-ca-cert-hash sha256:293b68dd99d5c75256004a8ddc8637ea08a1940f52c1b0e6476e24cc10aea3dd --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-249026-m02"
	I0731 11:15:05.677807  100669 command_runner.go:130] > [preflight] Running pre-flight checks
	I0731 11:15:05.704767  100669 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0731 11:15:05.704797  100669 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1038-gcp
	I0731 11:15:05.704806  100669 command_runner.go:130] > OS: Linux
	I0731 11:15:05.704812  100669 command_runner.go:130] > CGROUPS_CPU: enabled
	I0731 11:15:05.704818  100669 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0731 11:15:05.704823  100669 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0731 11:15:05.704831  100669 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0731 11:15:05.704838  100669 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0731 11:15:05.704847  100669 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0731 11:15:05.704868  100669 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0731 11:15:05.704879  100669 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0731 11:15:05.704884  100669 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0731 11:15:05.783498  100669 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0731 11:15:05.783533  100669 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0731 11:15:05.808667  100669 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 11:15:05.808725  100669 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 11:15:05.808738  100669 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0731 11:15:05.885286  100669 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0731 11:15:08.398163  100669 command_runner.go:130] > This node has joined the cluster:
	I0731 11:15:08.398186  100669 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0731 11:15:08.398192  100669 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0731 11:15:08.398199  100669 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0731 11:15:08.400645  100669 command_runner.go:130] ! W0731 11:15:05.677370    1111 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0731 11:15:08.400671  100669 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1038-gcp\n", err: exit status 1
	I0731 11:15:08.400682  100669 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 11:15:08.400714  100669 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4r0n40.oq7amwr9erkn9mcg --discovery-token-ca-cert-hash sha256:293b68dd99d5c75256004a8ddc8637ea08a1940f52c1b0e6476e24cc10aea3dd --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-249026-m02": (2.755564232s)
	I0731 11:15:08.400737  100669 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0731 11:15:08.561443  100669 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0731 11:15:08.561476  100669 start.go:303] JoinCluster complete in 3.072527382s
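
The join itself is just the printed kubeadm command, executed over SSH on the new node. A hedged stand-in for that step as a local Go program is sketched below; the token and hash are placeholders, and minikube's real ssh_runner does considerably more.

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Placeholder token/hash; the real values are minted by
        // "kubeadm token create --print-join-command" as shown in the log.
        cmd := exec.Command("sudo", "kubeadm", "join",
            "control-plane.minikube.internal:8443",
            "--token", "<token>",
            "--discovery-token-ca-cert-hash", "sha256:<hash>",
            "--ignore-preflight-errors=all",
            "--cri-socket", "/var/run/crio/crio.sock",
            "--node-name=multinode-249026-m02",
        )
        out, err := cmd.CombinedOutput()
        log.Printf("kubeadm output:\n%s", out)
        if err != nil {
            log.Fatal(err)
        }
    }
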
	I0731 11:15:08.561492  100669 cni.go:84] Creating CNI manager for ""
	I0731 11:15:08.561499  100669 cni.go:136] 2 nodes found, recommending kindnet
	I0731 11:15:08.561548  100669 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0731 11:15:08.565041  100669 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0731 11:15:08.565064  100669 command_runner.go:130] >   Size: 3955775   	Blocks: 7736       IO Block: 4096   regular file
	I0731 11:15:08.565075  100669 command_runner.go:130] > Device: 37h/55d	Inode: 556174      Links: 1
	I0731 11:15:08.565083  100669 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0731 11:15:08.565091  100669 command_runner.go:130] > Access: 2023-05-09 19:53:47.000000000 +0000
	I0731 11:15:08.565102  100669 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I0731 11:15:08.565110  100669 command_runner.go:130] > Change: 2023-07-31 10:55:52.654716471 +0000
	I0731 11:15:08.565120  100669 command_runner.go:130] >  Birth: 2023-07-31 10:55:52.630714145 +0000
	I0731 11:15:08.565171  100669 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0731 11:15:08.565185  100669 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0731 11:15:08.581684  100669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0731 11:15:08.814244  100669 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0731 11:15:08.817587  100669 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0731 11:15:08.820029  100669 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0731 11:15:08.829984  100669 command_runner.go:130] > daemonset.apps/kindnet configured
	I0731 11:15:08.834221  100669 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16968-8855/kubeconfig
	I0731 11:15:08.834443  100669 kapi.go:59] client config for multinode-249026: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16968-8855/.minikube/profiles/multinode-249026/client.crt", KeyFile:"/home/jenkins/minikube-integration/16968-8855/.minikube/profiles/multinode-249026/client.key", CAFile:"/home/jenkins/minikube-integration/16968-8855/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2840), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 11:15:08.834728  100669 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0731 11:15:08.834740  100669 round_trippers.go:469] Request Headers:
	I0731 11:15:08.834748  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:15:08.834754  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:15:08.836698  100669 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0731 11:15:08.836716  100669 round_trippers.go:577] Response Headers:
	I0731 11:15:08.836723  100669 round_trippers.go:580]     Audit-Id: 6c86ef8f-900e-4ca9-9d55-c6930fa05338
	I0731 11:15:08.836729  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:15:08.836735  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:15:08.836740  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:15:08.836749  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:15:08.836755  100669 round_trippers.go:580]     Content-Length: 291
	I0731 11:15:08.836763  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:15:08 GMT
	I0731 11:15:08.836784  100669 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"cf2497d0-8d01-4f65-8f7c-13691a19b413","resourceVersion":"421","creationTimestamp":"2023-07-31T11:14:09Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0731 11:15:08.836874  100669 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-249026" context rescaled to 1 replicas
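
The rescale above goes through the deployment's scale subresource (the GET shown a few lines earlier). For context, here is a minimal client-go sketch of pinning a deployment to one replica the same way; this is illustrative, not minikube's own kapi code, and the kubeconfig path is taken from the log.

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path as it appears in the log above.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/16968-8855/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        ctx := context.Background()

        // Read the current scale of the coredns deployment ...
        scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        // ... and pin it to 1 replica if it is anything else.
        if scale.Spec.Replicas != 1 {
            scale.Spec.Replicas = 1
            if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
                log.Fatal(err)
            }
        }
        fmt.Println("coredns pinned to 1 replica")
    }
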
	I0731 11:15:08.836900  100669 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0731 11:15:08.839320  100669 out.go:177] * Verifying Kubernetes components...
	I0731 11:15:08.840846  100669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 11:15:08.851606  100669 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16968-8855/kubeconfig
	I0731 11:15:08.851963  100669 kapi.go:59] client config for multinode-249026: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16968-8855/.minikube/profiles/multinode-249026/client.crt", KeyFile:"/home/jenkins/minikube-integration/16968-8855/.minikube/profiles/multinode-249026/client.key", CAFile:"/home/jenkins/minikube-integration/16968-8855/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2840), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 11:15:08.852283  100669 node_ready.go:35] waiting up to 6m0s for node "multinode-249026-m02" to be "Ready" ...
	I0731 11:15:08.852352  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026-m02
	I0731 11:15:08.852363  100669 round_trippers.go:469] Request Headers:
	I0731 11:15:08.852374  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:15:08.852386  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:15:08.854434  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:15:08.854448  100669 round_trippers.go:577] Response Headers:
	I0731 11:15:08.854455  100669 round_trippers.go:580]     Audit-Id: 1fc5f4da-1046-4c7e-875e-e2a7b50b21de
	I0731 11:15:08.854460  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:15:08.854465  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:15:08.854470  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:15:08.854475  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:15:08.854481  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:15:08 GMT
	I0731 11:15:08.854582  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026-m02","uid":"03394f97-517e-4ac3-a12a-9f2a0185cdc6","resourceVersion":"457","creationTimestamp":"2023-07-31T11:15:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:15:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:15:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I0731 11:15:08.854925  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026-m02
	I0731 11:15:08.854938  100669 round_trippers.go:469] Request Headers:
	I0731 11:15:08.854949  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:15:08.854955  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:15:08.857106  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:15:08.857124  100669 round_trippers.go:577] Response Headers:
	I0731 11:15:08.857130  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:15:08 GMT
	I0731 11:15:08.857136  100669 round_trippers.go:580]     Audit-Id: 83286970-8dcf-4123-83d2-4121ab8acaac
	I0731 11:15:08.857142  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:15:08.857147  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:15:08.857152  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:15:08.857162  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:15:08.857243  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026-m02","uid":"03394f97-517e-4ac3-a12a-9f2a0185cdc6","resourceVersion":"457","creationTimestamp":"2023-07-31T11:15:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:15:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:15:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
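
The repeated GETs that follow are minikube polling the Node object until its Ready condition is True (the 6m0s wait announced above). Below is a minimal client-go sketch of an equivalent wait loop; it is simplified and is not minikube's node_ready implementation, and the default kubeconfig location is assumed.

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isNodeReady reports whether the node's NodeReady condition is True.
    func isNodeReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    // waitForNode polls the API server until the named node is Ready or the
    // timeout elapses, mirroring the wait shown in the log.
    func waitForNode(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            n, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err == nil && isNodeReady(n) {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("node %q was not Ready within %s", name, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        if err := waitForNode(context.Background(), cs, "multinode-249026-m02", 6*time.Minute); err != nil {
            log.Fatal(err)
        }
        fmt.Println("node is Ready")
    }
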
	I0731 11:15:09.358265  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026-m02
	I0731 11:15:09.358298  100669 round_trippers.go:469] Request Headers:
	I0731 11:15:09.358307  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:15:09.358313  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:15:09.360711  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:15:09.360739  100669 round_trippers.go:577] Response Headers:
	I0731 11:15:09.360751  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:15:09.360761  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:15:09 GMT
	I0731 11:15:09.360771  100669 round_trippers.go:580]     Audit-Id: a90b6647-4373-4470-b8ba-c41d59b3411f
	I0731 11:15:09.360784  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:15:09.360792  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:15:09.360802  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:15:09.360898  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026-m02","uid":"03394f97-517e-4ac3-a12a-9f2a0185cdc6","resourceVersion":"457","creationTimestamp":"2023-07-31T11:15:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:15:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:15:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I0731 11:15:09.858504  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026-m02
	I0731 11:15:09.858523  100669 round_trippers.go:469] Request Headers:
	I0731 11:15:09.858532  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:15:09.858538  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:15:09.860904  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:15:09.860933  100669 round_trippers.go:577] Response Headers:
	I0731 11:15:09.860946  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:15:09.860973  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:15:09 GMT
	I0731 11:15:09.860990  100669 round_trippers.go:580]     Audit-Id: b192c7be-3425-4709-a3a0-b5c1a00d8589
	I0731 11:15:09.860999  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:15:09.861006  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:15:09.861013  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:15:09.861179  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026-m02","uid":"03394f97-517e-4ac3-a12a-9f2a0185cdc6","resourceVersion":"457","creationTimestamp":"2023-07-31T11:15:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:15:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:15:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I0731 11:15:10.357649  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026-m02
	I0731 11:15:10.357668  100669 round_trippers.go:469] Request Headers:
	I0731 11:15:10.357676  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:15:10.357682  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:15:10.360449  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:15:10.360477  100669 round_trippers.go:577] Response Headers:
	I0731 11:15:10.360493  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:15:10.360503  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:15:10.360511  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:15:10.360520  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:15:10.360530  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:15:10 GMT
	I0731 11:15:10.360542  100669 round_trippers.go:580]     Audit-Id: e01b5d85-4b7d-4a28-bf5b-778b848bebc2
	I0731 11:15:10.360667  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026-m02","uid":"03394f97-517e-4ac3-a12a-9f2a0185cdc6","resourceVersion":"457","creationTimestamp":"2023-07-31T11:15:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:15:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:15:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I0731 11:15:10.858165  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026-m02
	I0731 11:15:10.858190  100669 round_trippers.go:469] Request Headers:
	I0731 11:15:10.858198  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:15:10.858204  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:15:10.860358  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:15:10.860380  100669 round_trippers.go:577] Response Headers:
	I0731 11:15:10.860417  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:15:10 GMT
	I0731 11:15:10.860430  100669 round_trippers.go:580]     Audit-Id: cf65ef39-16de-4c69-9b61-48d77f7af735
	I0731 11:15:10.860440  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:15:10.860449  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:15:10.860457  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:15:10.860463  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:15:10.860565  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026-m02","uid":"03394f97-517e-4ac3-a12a-9f2a0185cdc6","resourceVersion":"474","creationTimestamp":"2023-07-31T11:15:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:15:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:15:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5176 chars]
	I0731 11:15:10.860861  100669 node_ready.go:49] node "multinode-249026-m02" has status "Ready":"True"
	I0731 11:15:10.860877  100669 node_ready.go:38] duration metric: took 2.008576804s waiting for node "multinode-249026-m02" to be "Ready" ...
	I0731 11:15:10.860887  100669 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 11:15:10.860946  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0731 11:15:10.860956  100669 round_trippers.go:469] Request Headers:
	I0731 11:15:10.860967  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:15:10.860979  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:15:10.864017  100669 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 11:15:10.864037  100669 round_trippers.go:577] Response Headers:
	I0731 11:15:10.864048  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:15:10.864058  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:15:10.864079  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:15:10.864100  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:15:10.864110  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:15:10 GMT
	I0731 11:15:10.864122  100669 round_trippers.go:580]     Audit-Id: 1e56f0e1-0d59-4cae-b20f-29a6ebd84fab
	I0731 11:15:10.866471  100669 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"474"},"items":[{"metadata":{"name":"coredns-5d78c9869d-z57mv","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"10ff228c-5c0d-4012-8b2c-79ff8210e4e1","resourceVersion":"417","creationTimestamp":"2023-07-31T11:14:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a9fa5001-05d6-48d3-ac30-b77811f7aa33","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a9fa5001-05d6-48d3-ac30-b77811f7aa33\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68974 chars]
	I0731 11:15:10.868657  100669 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-z57mv" in "kube-system" namespace to be "Ready" ...
	I0731 11:15:10.868718  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-z57mv
	I0731 11:15:10.868726  100669 round_trippers.go:469] Request Headers:
	I0731 11:15:10.868733  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:15:10.868739  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:15:10.870647  100669 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0731 11:15:10.870662  100669 round_trippers.go:577] Response Headers:
	I0731 11:15:10.870672  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:15:10.870681  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:15:10.870693  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:15:10.870698  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:15:10 GMT
	I0731 11:15:10.870705  100669 round_trippers.go:580]     Audit-Id: 1d3d099c-2a70-478a-9b43-f8cc5c895e41
	I0731 11:15:10.870710  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:15:10.870793  100669 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-z57mv","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"10ff228c-5c0d-4012-8b2c-79ff8210e4e1","resourceVersion":"417","creationTimestamp":"2023-07-31T11:14:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a9fa5001-05d6-48d3-ac30-b77811f7aa33","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a9fa5001-05d6-48d3-ac30-b77811f7aa33\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0731 11:15:10.871169  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:15:10.871180  100669 round_trippers.go:469] Request Headers:
	I0731 11:15:10.871187  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:15:10.871196  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:15:10.872895  100669 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0731 11:15:10.872910  100669 round_trippers.go:577] Response Headers:
	I0731 11:15:10.872917  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:15:10 GMT
	I0731 11:15:10.872923  100669 round_trippers.go:580]     Audit-Id: b1ec2431-8d9b-422b-93d5-3f78f6d5098d
	I0731 11:15:10.872931  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:15:10.872943  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:15:10.872953  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:15:10.872966  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:15:10.873053  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"401","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0731 11:15:10.873313  100669 pod_ready.go:92] pod "coredns-5d78c9869d-z57mv" in "kube-system" namespace has status "Ready":"True"
	I0731 11:15:10.873324  100669 pod_ready.go:81] duration metric: took 4.647527ms waiting for pod "coredns-5d78c9869d-z57mv" in "kube-system" namespace to be "Ready" ...
	I0731 11:15:10.873333  100669 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-249026" in "kube-system" namespace to be "Ready" ...
	I0731 11:15:10.873369  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-249026
	I0731 11:15:10.873376  100669 round_trippers.go:469] Request Headers:
	I0731 11:15:10.873382  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:15:10.873388  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:15:10.875094  100669 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0731 11:15:10.875112  100669 round_trippers.go:577] Response Headers:
	I0731 11:15:10.875119  100669 round_trippers.go:580]     Audit-Id: 2d78bffd-2fec-4460-bac8-ddd933a2e774
	I0731 11:15:10.875124  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:15:10.875130  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:15:10.875135  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:15:10.875141  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:15:10.875148  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:15:10 GMT
	I0731 11:15:10.875240  100669 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-249026","namespace":"kube-system","uid":"2fd5af09-4d3d-44e9-a37e-9cfd2a7def67","resourceVersion":"388","creationTimestamp":"2023-07-31T11:14:08Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"5e9e96d0b4488e99389e41cddc8a43f6","kubernetes.io/config.mirror":"5e9e96d0b4488e99389e41cddc8a43f6","kubernetes.io/config.seen":"2023-07-31T11:14:02.981941512Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0731 11:15:10.875547  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:15:10.875557  100669 round_trippers.go:469] Request Headers:
	I0731 11:15:10.875564  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:15:10.875571  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:15:10.877355  100669 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0731 11:15:10.877374  100669 round_trippers.go:577] Response Headers:
	I0731 11:15:10.877384  100669 round_trippers.go:580]     Audit-Id: 8e26c1a7-547b-4800-81a4-dee37309188f
	I0731 11:15:10.877392  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:15:10.877405  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:15:10.877418  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:15:10.877430  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:15:10.877442  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:15:10 GMT
	I0731 11:15:10.877551  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"401","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0731 11:15:10.877829  100669 pod_ready.go:92] pod "etcd-multinode-249026" in "kube-system" namespace has status "Ready":"True"
	I0731 11:15:10.877841  100669 pod_ready.go:81] duration metric: took 4.503027ms waiting for pod "etcd-multinode-249026" in "kube-system" namespace to be "Ready" ...
	I0731 11:15:10.877854  100669 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-249026" in "kube-system" namespace to be "Ready" ...
	I0731 11:15:10.877890  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-249026
	I0731 11:15:10.877897  100669 round_trippers.go:469] Request Headers:
	I0731 11:15:10.877904  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:15:10.877910  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:15:10.879647  100669 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0731 11:15:10.879662  100669 round_trippers.go:577] Response Headers:
	I0731 11:15:10.879673  100669 round_trippers.go:580]     Audit-Id: 7f30813e-6d8f-49ac-821d-a0e10bb1a291
	I0731 11:15:10.879683  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:15:10.879692  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:15:10.879704  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:15:10.879711  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:15:10.879719  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:15:10 GMT
	I0731 11:15:10.879842  100669 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-249026","namespace":"kube-system","uid":"0978e9cd-dfdc-4299-b370-eecc072de5cd","resourceVersion":"390","creationTimestamp":"2023-07-31T11:14:09Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"fbdd26f0ce94907fb765628c686054b9","kubernetes.io/config.mirror":"fbdd26f0ce94907fb765628c686054b9","kubernetes.io/config.seen":"2023-07-31T11:14:09.344025425Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0731 11:15:10.880260  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:15:10.880274  100669 round_trippers.go:469] Request Headers:
	I0731 11:15:10.880281  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:15:10.880287  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:15:10.881902  100669 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0731 11:15:10.881921  100669 round_trippers.go:577] Response Headers:
	I0731 11:15:10.881931  100669 round_trippers.go:580]     Audit-Id: adb1fd57-af2d-43e3-9d2e-f35920992de1
	I0731 11:15:10.881939  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:15:10.881947  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:15:10.881956  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:15:10.881972  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:15:10.881980  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:15:10 GMT
	I0731 11:15:10.882063  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"401","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0731 11:15:10.882341  100669 pod_ready.go:92] pod "kube-apiserver-multinode-249026" in "kube-system" namespace has status "Ready":"True"
	I0731 11:15:10.882353  100669 pod_ready.go:81] duration metric: took 4.494363ms waiting for pod "kube-apiserver-multinode-249026" in "kube-system" namespace to be "Ready" ...
	I0731 11:15:10.882361  100669 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-249026" in "kube-system" namespace to be "Ready" ...
	I0731 11:15:10.882402  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-249026
	I0731 11:15:10.882409  100669 round_trippers.go:469] Request Headers:
	I0731 11:15:10.882416  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:15:10.882422  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:15:10.884028  100669 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0731 11:15:10.884044  100669 round_trippers.go:577] Response Headers:
	I0731 11:15:10.884051  100669 round_trippers.go:580]     Audit-Id: bf64a6e1-a3df-44a5-bd23-ff15fb2aa6ee
	I0731 11:15:10.884060  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:15:10.884069  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:15:10.884078  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:15:10.884092  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:15:10.884105  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:15:10 GMT
	I0731 11:15:10.884236  100669 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-249026","namespace":"kube-system","uid":"0b6e8e07-eb1c-4e59-b299-3983e587571c","resourceVersion":"389","creationTimestamp":"2023-07-31T11:14:09Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"26066d183dc51193d263b4e20f2cec66","kubernetes.io/config.mirror":"26066d183dc51193d263b4e20f2cec66","kubernetes.io/config.seen":"2023-07-31T11:14:09.344026956Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0731 11:15:10.884586  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:15:10.884598  100669 round_trippers.go:469] Request Headers:
	I0731 11:15:10.884605  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:15:10.884611  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:15:10.887761  100669 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 11:15:10.887778  100669 round_trippers.go:577] Response Headers:
	I0731 11:15:10.887787  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:15:10.887797  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:15:10.887808  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:15:10 GMT
	I0731 11:15:10.887817  100669 round_trippers.go:580]     Audit-Id: dbf7c0bc-216c-48f2-872a-161c5ae6ca29
	I0731 11:15:10.887825  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:15:10.887836  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:15:10.887931  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"401","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0731 11:15:10.888297  100669 pod_ready.go:92] pod "kube-controller-manager-multinode-249026" in "kube-system" namespace has status "Ready":"True"
	I0731 11:15:10.888314  100669 pod_ready.go:81] duration metric: took 5.943991ms waiting for pod "kube-controller-manager-multinode-249026" in "kube-system" namespace to be "Ready" ...
	I0731 11:15:10.888324  100669 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-29fgt" in "kube-system" namespace to be "Ready" ...
	I0731 11:15:11.058693  100669 request.go:628] Waited for 170.318615ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-29fgt
	I0731 11:15:11.058764  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-29fgt
	I0731 11:15:11.058778  100669 round_trippers.go:469] Request Headers:
	I0731 11:15:11.058786  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:15:11.058797  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:15:11.061124  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:15:11.061140  100669 round_trippers.go:577] Response Headers:
	I0731 11:15:11.061147  100669 round_trippers.go:580]     Audit-Id: 243ba2a1-30fe-4de8-b8b5-4192b720047a
	I0731 11:15:11.061152  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:15:11.061158  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:15:11.061165  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:15:11.061170  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:15:11.061176  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:15:11 GMT
	I0731 11:15:11.061260  100669 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-29fgt","generateName":"kube-proxy-","namespace":"kube-system","uid":"70cacbca-af79-499e-bf4f-79afec890159","resourceVersion":"468","creationTimestamp":"2023-07-31T11:15:08Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bc6c1bb2-2508-44c2-864c-44710ecfc28b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:15:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc6c1bb2-2508-44c2-864c-44710ecfc28b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0731 11:15:11.258996  100669 request.go:628] Waited for 197.354517ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-249026-m02
	I0731 11:15:11.259055  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026-m02
	I0731 11:15:11.259059  100669 round_trippers.go:469] Request Headers:
	I0731 11:15:11.259068  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:15:11.259074  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:15:11.261488  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:15:11.261508  100669 round_trippers.go:577] Response Headers:
	I0731 11:15:11.261515  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:15:11.261521  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:15:11.261527  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:15:11.261548  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:15:11 GMT
	I0731 11:15:11.261557  100669 round_trippers.go:580]     Audit-Id: 434ea2da-fc84-420c-923d-23837baeec93
	I0731 11:15:11.261568  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:15:11.261681  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026-m02","uid":"03394f97-517e-4ac3-a12a-9f2a0185cdc6","resourceVersion":"474","creationTimestamp":"2023-07-31T11:15:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:15:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:15:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5176 chars]
	I0731 11:15:11.261973  100669 pod_ready.go:92] pod "kube-proxy-29fgt" in "kube-system" namespace has status "Ready":"True"
	I0731 11:15:11.261990  100669 pod_ready.go:81] duration metric: took 373.661393ms waiting for pod "kube-proxy-29fgt" in "kube-system" namespace to be "Ready" ...
	I0731 11:15:11.262000  100669 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f64nn" in "kube-system" namespace to be "Ready" ...
	I0731 11:15:11.458340  100669 request.go:628] Waited for 196.279024ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f64nn
	I0731 11:15:11.458397  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f64nn
	I0731 11:15:11.458417  100669 round_trippers.go:469] Request Headers:
	I0731 11:15:11.458429  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:15:11.458440  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:15:11.461129  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:15:11.461155  100669 round_trippers.go:577] Response Headers:
	I0731 11:15:11.461165  100669 round_trippers.go:580]     Audit-Id: af0de097-37a1-4194-84f2-548bacd656e7
	I0731 11:15:11.461174  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:15:11.461183  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:15:11.461190  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:15:11.461197  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:15:11.461203  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:15:11 GMT
	I0731 11:15:11.461299  100669 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-f64nn","generateName":"kube-proxy-","namespace":"kube-system","uid":"c18aed2c-dab2-4dff-b288-2e39582688bb","resourceVersion":"384","creationTimestamp":"2023-07-31T11:14:21Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bc6c1bb2-2508-44c2-864c-44710ecfc28b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc6c1bb2-2508-44c2-864c-44710ecfc28b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0731 11:15:11.659079  100669 request.go:628] Waited for 197.374382ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:15:11.659151  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:15:11.659160  100669 round_trippers.go:469] Request Headers:
	I0731 11:15:11.659171  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:15:11.659184  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:15:11.661625  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:15:11.661643  100669 round_trippers.go:577] Response Headers:
	I0731 11:15:11.661650  100669 round_trippers.go:580]     Audit-Id: cf52bcbc-a157-4f8f-9895-38dd0d8ed758
	I0731 11:15:11.661656  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:15:11.661661  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:15:11.661666  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:15:11.661672  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:15:11.661677  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:15:11 GMT
	I0731 11:15:11.661796  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"401","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0731 11:15:11.662100  100669 pod_ready.go:92] pod "kube-proxy-f64nn" in "kube-system" namespace has status "Ready":"True"
	I0731 11:15:11.662112  100669 pod_ready.go:81] duration metric: took 400.107023ms waiting for pod "kube-proxy-f64nn" in "kube-system" namespace to be "Ready" ...
	I0731 11:15:11.662120  100669 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-249026" in "kube-system" namespace to be "Ready" ...
	I0731 11:15:11.858548  100669 request.go:628] Waited for 196.368274ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-249026
	I0731 11:15:11.858632  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-249026
	I0731 11:15:11.858644  100669 round_trippers.go:469] Request Headers:
	I0731 11:15:11.858652  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:15:11.858659  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:15:11.861678  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:15:11.861839  100669 round_trippers.go:577] Response Headers:
	I0731 11:15:11.861856  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:15:11.861868  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:15:11.861884  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:15:11 GMT
	I0731 11:15:11.861893  100669 round_trippers.go:580]     Audit-Id: 42c56b25-3729-42ea-b4a5-8b4f521f3cfc
	I0731 11:15:11.861905  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:15:11.861917  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:15:11.862048  100669 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-249026","namespace":"kube-system","uid":"5e54a5cf-023c-4ed1-a1df-e67e9b4ca1f1","resourceVersion":"391","creationTimestamp":"2023-07-31T11:14:09Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"32a0babedab845278fd2b3f9ddf28116","kubernetes.io/config.mirror":"32a0babedab845278fd2b3f9ddf28116","kubernetes.io/config.seen":"2023-07-31T11:14:09.344028171Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0731 11:15:12.058189  100669 request.go:628] Waited for 195.711933ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:15:12.058252  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-249026
	I0731 11:15:12.058259  100669 round_trippers.go:469] Request Headers:
	I0731 11:15:12.058267  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:15:12.058276  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:15:12.060558  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:15:12.060583  100669 round_trippers.go:577] Response Headers:
	I0731 11:15:12.060590  100669 round_trippers.go:580]     Audit-Id: ad0d72cd-fad9-4232-9562-5b87ba3a4284
	I0731 11:15:12.060598  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:15:12.060607  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:15:12.060617  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:15:12.060627  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:15:12.060642  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:15:12 GMT
	I0731 11:15:12.060737  100669 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"401","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-31T11:14:06Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0731 11:15:12.061131  100669 pod_ready.go:92] pod "kube-scheduler-multinode-249026" in "kube-system" namespace has status "Ready":"True"
	I0731 11:15:12.061148  100669 pod_ready.go:81] duration metric: took 399.021461ms waiting for pod "kube-scheduler-multinode-249026" in "kube-system" namespace to be "Ready" ...
	I0731 11:15:12.061162  100669 pod_ready.go:38] duration metric: took 1.200263318s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 11:15:12.061197  100669 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 11:15:12.061250  100669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 11:15:12.071629  100669 system_svc.go:56] duration metric: took 10.443465ms WaitForService to wait for kubelet.
	I0731 11:15:12.071648  100669 kubeadm.go:581] duration metric: took 3.234724727s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0731 11:15:12.071673  100669 node_conditions.go:102] verifying NodePressure condition ...
	I0731 11:15:12.259062  100669 request.go:628] Waited for 187.320917ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0731 11:15:12.259120  100669 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0731 11:15:12.259125  100669 round_trippers.go:469] Request Headers:
	I0731 11:15:12.259133  100669 round_trippers.go:473]     Accept: application/json, */*
	I0731 11:15:12.259145  100669 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 11:15:12.261359  100669 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 11:15:12.261381  100669 round_trippers.go:577] Response Headers:
	I0731 11:15:12.261392  100669 round_trippers.go:580]     Audit-Id: 76905cab-7d21-424e-9987-365140d4f2e1
	I0731 11:15:12.261402  100669 round_trippers.go:580]     Cache-Control: no-cache, private
	I0731 11:15:12.261412  100669 round_trippers.go:580]     Content-Type: application/json
	I0731 11:15:12.261421  100669 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 480c0c3f-5202-45ce-9fe9-a5a786a173df
	I0731 11:15:12.261434  100669 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5ca8956e-bfad-4957-b829-8a5e1cd5ead7
	I0731 11:15:12.261446  100669 round_trippers.go:580]     Date: Mon, 31 Jul 2023 11:15:12 GMT
	I0731 11:15:12.261636  100669 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"476"},"items":[{"metadata":{"name":"multinode-249026","uid":"1e48f4b7-29ff-4d88-9cd6-940475faa126","resourceVersion":"401","creationTimestamp":"2023-07-31T11:14:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-249026","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7b0f4114385a1c2b88c73e894c2289f44aee35","minikube.k8s.io/name":"multinode-249026","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_31T11_14_10_0700","minikube.k8s.io/version":"v1.31.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12168 chars]
	I0731 11:15:12.262292  100669 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0731 11:15:12.262312  100669 node_conditions.go:123] node cpu capacity is 8
	I0731 11:15:12.262325  100669 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0731 11:15:12.262331  100669 node_conditions.go:123] node cpu capacity is 8
	I0731 11:15:12.262336  100669 node_conditions.go:105] duration metric: took 190.657805ms to run NodePressure ...
	I0731 11:15:12.262351  100669 start.go:228] waiting for startup goroutines ...
	I0731 11:15:12.262380  100669 start.go:242] writing updated cluster config ...
	I0731 11:15:12.262740  100669 ssh_runner.go:195] Run: rm -f paused
	I0731 11:15:12.308050  100669 start.go:596] kubectl: 1.27.4, cluster: 1.27.3 (minor skew: 0)
	I0731 11:15:12.310126  100669 out.go:177] * Done! kubectl is now configured to use "multinode-249026" cluster and "default" namespace by default
	
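Note on the repeated "Waited for ... due to client-side throttling, not priority and fairness" lines above: those pauses come from client-go's default client-side token-bucket limiter (QPS 5, burst 10), not from server-side API Priority and Fairness. A minimal, hypothetical Go sketch of raising those limits on a rest.Config — newFastClient is an illustrative name, not minikube's code:

package clientutil

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newFastClient builds a clientset whose rate limiter is generous enough
// that bursts of GETs (like the node/pod readiness polls in this log)
// are not delayed by the default QPS=5/Burst=10 token bucket.
func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	// The ~170-200ms waits logged by request.go:628 are the default token
	// bucket refilling; raising QPS/Burst removes them (values illustrative).
	cfg.QPS = 50
	cfg.Burst = 100
	return kubernetes.NewForConfig(cfg)
}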
	* 
	* ==> CRI-O <==
	* Jul 31 11:14:54 multinode-249026 crio[958]: time="2023-07-31 11:14:54.033311156Z" level=info msg="Starting container: c618411a80ed767b1db15ab40e5a9d51af6f69fe98d1a196ef8c3bcf67d5c389" id=b43b7a85-320b-40b6-9837-18b7579cab46 name=/runtime.v1.RuntimeService/StartContainer
	Jul 31 11:14:54 multinode-249026 crio[958]: time="2023-07-31 11:14:54.034156427Z" level=info msg="Created container 342bd9a4e1dba7dbfef3439d03620f75a156e6a8abb773f276cdc6efeee06ab4: kube-system/coredns-5d78c9869d-z57mv/coredns" id=3b04bb64-76b5-4021-95d7-fab702e55393 name=/runtime.v1.RuntimeService/CreateContainer
	Jul 31 11:14:54 multinode-249026 crio[958]: time="2023-07-31 11:14:54.034645534Z" level=info msg="Starting container: 342bd9a4e1dba7dbfef3439d03620f75a156e6a8abb773f276cdc6efeee06ab4" id=4527cca1-f1fb-4d04-abf6-b9f8313a3673 name=/runtime.v1.RuntimeService/StartContainer
	Jul 31 11:14:54 multinode-249026 crio[958]: time="2023-07-31 11:14:54.042735481Z" level=info msg="Started container" PID=2339 containerID=c618411a80ed767b1db15ab40e5a9d51af6f69fe98d1a196ef8c3bcf67d5c389 description=kube-system/storage-provisioner/storage-provisioner id=b43b7a85-320b-40b6-9837-18b7579cab46 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8b98d432ec3cc54b2dd24838749c14ed5ac1fb5e403f6459a1b8eed151bf8d6e
	Jul 31 11:14:54 multinode-249026 crio[958]: time="2023-07-31 11:14:54.043603670Z" level=info msg="Started container" PID=2346 containerID=342bd9a4e1dba7dbfef3439d03620f75a156e6a8abb773f276cdc6efeee06ab4 description=kube-system/coredns-5d78c9869d-z57mv/coredns id=4527cca1-f1fb-4d04-abf6-b9f8313a3673 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b9c053cda4b75813978f329239f3d953134331435a6ebadc189ad091de25ce4a
	Jul 31 11:15:13 multinode-249026 crio[958]: time="2023-07-31 11:15:13.302595718Z" level=info msg="Running pod sandbox: default/busybox-67b7f59bb-nhmrt/POD" id=b7ee4263-ac33-4562-b307-8bc19a3ac6bb name=/runtime.v1.RuntimeService/RunPodSandbox
	Jul 31 11:15:13 multinode-249026 crio[958]: time="2023-07-31 11:15:13.302674355Z" level=warning msg="Allowed annotations are specified for workload []"
	Jul 31 11:15:13 multinode-249026 crio[958]: time="2023-07-31 11:15:13.318338581Z" level=info msg="Got pod network &{Name:busybox-67b7f59bb-nhmrt Namespace:default ID:22328075724f659d506730eeb32a59dbbf633f2b9ae39d04d8eea2b9e06c74e3 UID:8e7fe2e2-85e4-4b89-95d1-c37ca302dc32 NetNS:/var/run/netns/3f58940c-b821-4d3e-82c7-89480838a749 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jul 31 11:15:13 multinode-249026 crio[958]: time="2023-07-31 11:15:13.318381517Z" level=info msg="Adding pod default_busybox-67b7f59bb-nhmrt to CNI network \"kindnet\" (type=ptp)"
	Jul 31 11:15:13 multinode-249026 crio[958]: time="2023-07-31 11:15:13.328649010Z" level=info msg="Got pod network &{Name:busybox-67b7f59bb-nhmrt Namespace:default ID:22328075724f659d506730eeb32a59dbbf633f2b9ae39d04d8eea2b9e06c74e3 UID:8e7fe2e2-85e4-4b89-95d1-c37ca302dc32 NetNS:/var/run/netns/3f58940c-b821-4d3e-82c7-89480838a749 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jul 31 11:15:13 multinode-249026 crio[958]: time="2023-07-31 11:15:13.328833352Z" level=info msg="Checking pod default_busybox-67b7f59bb-nhmrt for CNI network kindnet (type=ptp)"
	Jul 31 11:15:13 multinode-249026 crio[958]: time="2023-07-31 11:15:13.373269982Z" level=info msg="Ran pod sandbox 22328075724f659d506730eeb32a59dbbf633f2b9ae39d04d8eea2b9e06c74e3 with infra container: default/busybox-67b7f59bb-nhmrt/POD" id=b7ee4263-ac33-4562-b307-8bc19a3ac6bb name=/runtime.v1.RuntimeService/RunPodSandbox
	Jul 31 11:15:13 multinode-249026 crio[958]: time="2023-07-31 11:15:13.374249037Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=19cfbb2c-f73c-4c7c-ae8d-64310c56638a name=/runtime.v1.ImageService/ImageStatus
	Jul 31 11:15:13 multinode-249026 crio[958]: time="2023-07-31 11:15:13.374430155Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=19cfbb2c-f73c-4c7c-ae8d-64310c56638a name=/runtime.v1.ImageService/ImageStatus
	Jul 31 11:15:13 multinode-249026 crio[958]: time="2023-07-31 11:15:13.375138318Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=ca3ad129-a0dc-4f08-9a1a-13c281b08218 name=/runtime.v1.ImageService/PullImage
	Jul 31 11:15:13 multinode-249026 crio[958]: time="2023-07-31 11:15:13.380344115Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Jul 31 11:15:13 multinode-249026 crio[958]: time="2023-07-31 11:15:13.627332576Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Jul 31 11:15:14 multinode-249026 crio[958]: time="2023-07-31 11:15:14.123833260Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335" id=ca3ad129-a0dc-4f08-9a1a-13c281b08218 name=/runtime.v1.ImageService/PullImage
	Jul 31 11:15:14 multinode-249026 crio[958]: time="2023-07-31 11:15:14.124903512Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=a1188cf5-834f-4a5f-88ae-cd7f2ab92735 name=/runtime.v1.ImageService/ImageStatus
	Jul 31 11:15:14 multinode-249026 crio[958]: time="2023-07-31 11:15:14.125555900Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=a1188cf5-834f-4a5f-88ae-cd7f2ab92735 name=/runtime.v1.ImageService/ImageStatus
	Jul 31 11:15:14 multinode-249026 crio[958]: time="2023-07-31 11:15:14.127269702Z" level=info msg="Creating container: default/busybox-67b7f59bb-nhmrt/busybox" id=170c13cd-93c8-4f87-82d2-031b0589ea9e name=/runtime.v1.RuntimeService/CreateContainer
	Jul 31 11:15:14 multinode-249026 crio[958]: time="2023-07-31 11:15:14.127420332Z" level=warning msg="Allowed annotations are specified for workload []"
	Jul 31 11:15:14 multinode-249026 crio[958]: time="2023-07-31 11:15:14.193243939Z" level=info msg="Created container df173aacfdb08b397d8704299e42386e41cf8e90f3e4b05711a77d13fae5be7f: default/busybox-67b7f59bb-nhmrt/busybox" id=170c13cd-93c8-4f87-82d2-031b0589ea9e name=/runtime.v1.RuntimeService/CreateContainer
	Jul 31 11:15:14 multinode-249026 crio[958]: time="2023-07-31 11:15:14.193873736Z" level=info msg="Starting container: df173aacfdb08b397d8704299e42386e41cf8e90f3e4b05711a77d13fae5be7f" id=bbe1fda7-65c6-430b-a0c9-170aa6fba022 name=/runtime.v1.RuntimeService/StartContainer
	Jul 31 11:15:14 multinode-249026 crio[958]: time="2023-07-31 11:15:14.202257700Z" level=info msg="Started container" PID=2510 containerID=df173aacfdb08b397d8704299e42386e41cf8e90f3e4b05711a77d13fae5be7f description=default/busybox-67b7f59bb-nhmrt/busybox id=bbe1fda7-65c6-430b-a0c9-170aa6fba022 name=/runtime.v1.RuntimeService/StartContainer sandboxID=22328075724f659d506730eeb32a59dbbf633f2b9ae39d04d8eea2b9e06c74e3
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	df173aacfdb08       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 seconds ago        Running             busybox                   0                   22328075724f6       busybox-67b7f59bb-nhmrt
	342bd9a4e1dba       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      23 seconds ago       Running             coredns                   0                   b9c053cda4b75       coredns-5d78c9869d-z57mv
	c618411a80ed7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      23 seconds ago       Running             storage-provisioner       0                   8b98d432ec3cc       storage-provisioner
	f532881d1236e       5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c                                      55 seconds ago       Running             kube-proxy                0                   101de7c18f9e2       kube-proxy-f64nn
	4b0511bf549e5       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da                                      55 seconds ago       Running             kindnet-cni               0                   a8f16f4a4edd6       kindnet-pgkb6
	f74bae863be8f       08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a                                      About a minute ago   Running             kube-apiserver            0                   f1a35e5674ec3       kube-apiserver-multinode-249026
	13187f578778f       41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a                                      About a minute ago   Running             kube-scheduler            0                   6ded3ea9db1c3       kube-scheduler-multinode-249026
	31ede3f67a088       7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f                                      About a minute ago   Running             kube-controller-manager   0                   6dca8df8a0dae       kube-controller-manager-multinode-249026
	e6fa1a33adecf       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681                                      About a minute ago   Running             etcd                      0                   50000fa859c23       etcd-multinode-249026
	
	* 
	* ==> coredns [342bd9a4e1dba7dbfef3439d03620f75a156e6a8abb773f276cdc6efeee06ab4] <==
	* [INFO] 10.244.1.2:59064 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000132002s
	[INFO] 10.244.0.3:34216 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100149s
	[INFO] 10.244.0.3:41254 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001437089s
	[INFO] 10.244.0.3:42747 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000085932s
	[INFO] 10.244.0.3:51990 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000056445s
	[INFO] 10.244.0.3:42618 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.000959641s
	[INFO] 10.244.0.3:50098 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000092514s
	[INFO] 10.244.0.3:53478 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00006622s
	[INFO] 10.244.0.3:55398 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000044686s
	[INFO] 10.244.1.2:55088 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113205s
	[INFO] 10.244.1.2:60242 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000101006s
	[INFO] 10.244.1.2:47703 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00006526s
	[INFO] 10.244.1.2:38940 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068764s
	[INFO] 10.244.0.3:57159 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000094553s
	[INFO] 10.244.0.3:36176 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081507s
	[INFO] 10.244.0.3:55186 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068716s
	[INFO] 10.244.0.3:52914 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000046805s
	[INFO] 10.244.1.2:43592 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151264s
	[INFO] 10.244.1.2:41514 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000111614s
	[INFO] 10.244.1.2:36072 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000115328s
	[INFO] 10.244.1.2:38442 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000070392s
	[INFO] 10.244.0.3:55927 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104767s
	[INFO] 10.244.0.3:54749 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000082008s
	[INFO] 10.244.0.3:44023 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000053768s
	[INFO] 10.244.0.3:45644 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000068773s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-249026
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-249026
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0b7b0f4114385a1c2b88c73e894c2289f44aee35
	                    minikube.k8s.io/name=multinode-249026
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_31T11_14_10_0700
	                    minikube.k8s.io/version=v1.31.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 31 Jul 2023 11:14:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-249026
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 31 Jul 2023 11:15:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 31 Jul 2023 11:14:53 +0000   Mon, 31 Jul 2023 11:14:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 31 Jul 2023 11:14:53 +0000   Mon, 31 Jul 2023 11:14:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 31 Jul 2023 11:14:53 +0000   Mon, 31 Jul 2023 11:14:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 31 Jul 2023 11:14:53 +0000   Mon, 31 Jul 2023 11:14:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-249026
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	System Info:
	  Machine ID:                 b96d906bf2a540a3bbfd52d35bd411ee
	  System UUID:                1d148bd6-0402-4ca4-94ef-07f28ff65009
	  Boot ID:                    c4e7adf1-530e-4fca-8214-6daedbc0c53f
	  Kernel Version:             5.15.0-1038-gcp
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-nhmrt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 coredns-5d78c9869d-z57mv                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     56s
	  kube-system                 etcd-multinode-249026                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         69s
	  kube-system                 kindnet-pgkb6                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-multinode-249026             250m (3%)     0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 kube-controller-manager-multinode-249026    200m (2%)     0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 kube-proxy-f64nn                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-multinode-249026             100m (1%)     0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 54s   kube-proxy       
	  Normal  Starting                 68s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  68s   kubelet          Node multinode-249026 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    68s   kubelet          Node multinode-249026 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     68s   kubelet          Node multinode-249026 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           56s   node-controller  Node multinode-249026 event: Registered Node multinode-249026 in Controller
	  Normal  NodeReady                24s   kubelet          Node multinode-249026 status is now: NodeReady
	
	
	Name:               multinode-249026-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-249026-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 31 Jul 2023 11:15:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-249026-m02" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 31 Jul 2023 11:15:10 +0000   Mon, 31 Jul 2023 11:15:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 31 Jul 2023 11:15:10 +0000   Mon, 31 Jul 2023 11:15:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 31 Jul 2023 11:15:10 +0000   Mon, 31 Jul 2023 11:15:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 31 Jul 2023 11:15:10 +0000   Mon, 31 Jul 2023 11:15:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-249026-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	System Info:
	  Machine ID:                 41801129419b40bc93f3499d3fbbc82d
	  System UUID:                31e30136-794b-4265-885e-6aaa2bc25149
	  Boot ID:                    c4e7adf1-530e-4fca-8214-6daedbc0c53f
	  Kernel Version:             5.15.0-1038-gcp
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-fvzbv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 kindnet-ssf2s              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      9s
	  kube-system                 kube-proxy-29fgt           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age               From             Message
	  ----    ------                   ----              ----             -------
	  Normal  Starting                 7s                kube-proxy       
	  Normal  NodeHasSufficientMemory  9s (x5 over 11s)  kubelet          Node multinode-249026-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x5 over 11s)  kubelet          Node multinode-249026-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x5 over 11s)  kubelet          Node multinode-249026-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7s                kubelet          Node multinode-249026-m02 status is now: NodeReady
	  Normal  RegisteredNode           6s                node-controller  Node multinode-249026-m02 event: Registered Node multinode-249026-m02 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.004927] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006598] FS-Cache: N-cookie d=00000000b387d585{9p.inode} n=00000000ef355d8f
	[  +0.007366] FS-Cache: N-key=[8] '92a00f0200000000'
	[ +13.163366] FS-Cache: Duplicate cookie detected
	[  +0.004759] FS-Cache: O-cookie c=00000011 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006792] FS-Cache: O-cookie d=0000000033120b35{9P.session} n=0000000051ffe9f3
	[  +0.007565] FS-Cache: O-key=[10] '34323935353934333438'
	[  +0.005373] FS-Cache: N-cookie c=00000012 [p=00000002 fl=2 nc=0 na=1]
	[  +0.007967] FS-Cache: N-cookie d=0000000033120b35{9P.session} n=000000003510cfd5
	[  +0.008908] FS-Cache: N-key=[10] '34323935353934333438'
	[  +9.008019] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Jul31 11:06] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 22 7e c3 30 75 2c 9e a9 47 19 7e 8d 08 00
	[  +1.023871] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 22 7e c3 30 75 2c 9e a9 47 19 7e 8d 08 00
	[  +2.015801] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 22 7e c3 30 75 2c 9e a9 47 19 7e 8d 08 00
	[  +4.255584] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 22 7e c3 30 75 2c 9e a9 47 19 7e 8d 08 00
	[  +8.191207] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 22 7e c3 30 75 2c 9e a9 47 19 7e 8d 08 00
	[ +16.126424] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 22 7e c3 30 75 2c 9e a9 47 19 7e 8d 08 00
	[Jul31 11:07] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 22 7e c3 30 75 2c 9e a9 47 19 7e 8d 08 00
	
	* 
	* ==> etcd [e6fa1a33adecf025badcf1b4a304a81938335fd74cc1ebadb8a36fb57bc7355a] <==
	* {"level":"info","ts":"2023-07-31T11:14:03.737Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-07-31T11:14:03.737Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-07-31T11:14:03.737Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-07-31T11:14:03.738Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-07-31T11:14:03.738Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-07-31T11:14:04.457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-07-31T11:14:04.457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-07-31T11:14:04.457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-07-31T11:14:04.457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-07-31T11:14:04.457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-07-31T11:14:04.457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-07-31T11:14:04.457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-07-31T11:14:04.458Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-249026 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-31T11:14:04.458Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-31T11:14:04.458Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-31T11:14:04.458Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-31T11:14:04.459Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-31T11:14:04.459Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-31T11:14:04.459Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-31T11:14:04.459Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-31T11:14:04.459Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-31T11:14:04.459Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-31T11:14:04.460Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-07-31T11:15:00.221Z","caller":"traceutil/trace.go:171","msg":"trace[728684664] transaction","detail":"{read_only:false; response_revision:426; number_of_response:1; }","duration":"119.302826ms","start":"2023-07-31T11:15:00.102Z","end":"2023-07-31T11:15:00.221Z","steps":["trace[728684664] 'process raft request'  (duration: 119.191012ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-31T11:15:00.870Z","caller":"traceutil/trace.go:171","msg":"trace[1459999848] transaction","detail":"{read_only:false; response_revision:427; number_of_response:1; }","duration":"122.339449ms","start":"2023-07-31T11:15:00.747Z","end":"2023-07-31T11:15:00.869Z","steps":["trace[1459999848] 'process raft request'  (duration: 122.23006ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  11:15:18 up 57 min,  0 users,  load average: 1.31, 1.24, 0.76
	Linux multinode-249026 5.15.0-1038-gcp #46~20.04.1-Ubuntu SMP Fri Jul 14 09:48:19 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [4b0511bf549e5fdbdb5b96fd61c3116f9b4f87e04cae972bbc3a3967e5c28ce4] <==
	* I0731 11:14:22.937265       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0731 11:14:22.937338       1 main.go:107] hostIP = 192.168.58.2
	podIP = 192.168.58.2
	I0731 11:14:22.937526       1 main.go:116] setting mtu 1500 for CNI 
	I0731 11:14:22.937551       1 main.go:146] kindnetd IP family: "ipv4"
	I0731 11:14:22.937574       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0731 11:14:53.172099       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0731 11:14:53.180503       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0731 11:14:53.180530       1 main.go:227] handling current node
	I0731 11:15:03.191455       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0731 11:15:03.191479       1 main.go:227] handling current node
	I0731 11:15:13.203350       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0731 11:15:13.203379       1 main.go:227] handling current node
	I0731 11:15:13.203390       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0731 11:15:13.203395       1 main.go:250] Node multinode-249026-m02 has CIDR [10.244.1.0/24] 
	I0731 11:15:13.203561       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	
	* 
	* ==> kube-apiserver [f74bae863be8f53a453e7c3463ca4704e6669d3adae7cd2470e722f45dd73d1f] <==
	* I0731 11:14:06.233770       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0731 11:14:06.233851       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 11:14:06.233921       1 aggregator.go:152] initial CRD sync complete...
	I0731 11:14:06.233950       1 autoregister_controller.go:141] Starting autoregister controller
	I0731 11:14:06.233977       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0731 11:14:06.234011       1 cache.go:39] Caches are synced for autoregister controller
	I0731 11:14:06.246073       1 controller.go:624] quota admission added evaluator for: namespaces
	I0731 11:14:06.330821       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 11:14:06.339335       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 11:14:06.896931       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0731 11:14:07.114501       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0731 11:14:07.117751       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0731 11:14:07.117772       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0731 11:14:07.478463       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0731 11:14:07.511039       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0731 11:14:07.649669       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0731 11:14:07.656212       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0731 11:14:07.657163       1 controller.go:624] quota admission added evaluator for: endpoints
	I0731 11:14:07.660830       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0731 11:14:08.134636       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0731 11:14:09.285163       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0731 11:14:09.296379       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0731 11:14:09.304149       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0731 11:14:21.744615       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0731 11:14:21.839201       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [31ede3f67a088d4ffab0b3a46ea548a00e3beb56fa6b55d0a2405d38c077caf0] <==
	* I0731 11:14:21.832315       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0731 11:14:21.839697       1 shared_informer.go:318] Caches are synced for resource quota
	I0731 11:14:21.847756       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-f64nn"
	I0731 11:14:21.849578       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-pgkb6"
	I0731 11:14:21.876945       1 shared_informer.go:318] Caches are synced for resource quota
	I0731 11:14:21.885737       1 shared_informer.go:318] Caches are synced for attach detach
	I0731 11:14:21.933312       1 shared_informer.go:318] Caches are synced for HPA
	I0731 11:14:21.994954       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-jhp95"
	I0731 11:14:22.000755       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-z57mv"
	I0731 11:14:22.076660       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5d78c9869d to 1 from 2"
	I0731 11:14:22.095088       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5d78c9869d-jhp95"
	I0731 11:14:22.330320       1 shared_informer.go:318] Caches are synced for garbage collector
	I0731 11:14:22.430578       1 shared_informer.go:318] Caches are synced for garbage collector
	I0731 11:14:22.430622       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0731 11:14:56.773264       1 node_lifecycle_controller.go:1046] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0731 11:15:08.056330       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-249026-m02\" does not exist"
	I0731 11:15:08.062385       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-249026-m02" podCIDRs=[10.244.1.0/24]
	I0731 11:15:08.065149       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-29fgt"
	I0731 11:15:08.066374       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-ssf2s"
	W0731 11:15:10.601808       1 topologycache.go:232] Can't get CPU or zone information for multinode-249026-m02 node
	I0731 11:15:11.775131       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-249026-m02"
	I0731 11:15:11.775169       1 event.go:307] "Event occurred" object="multinode-249026-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-249026-m02 event: Registered Node multinode-249026-m02 in Controller"
	I0731 11:15:12.983397       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-67b7f59bb to 2"
	I0731 11:15:12.990317       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-fvzbv"
	I0731 11:15:12.995110       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-nhmrt"
	
	* 
	* ==> kube-proxy [f532881d1236e2ded56d3272415cdfb4f32685cc6e2e8c424e3257e7d08586e8] <==
	* I0731 11:14:22.940806       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I0731 11:14:22.940903       1 server_others.go:110] "Detected node IP" address="192.168.58.2"
	I0731 11:14:22.940944       1 server_others.go:554] "Using iptables proxy"
	I0731 11:14:22.964736       1 server_others.go:192] "Using iptables Proxier"
	I0731 11:14:22.964783       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0731 11:14:22.964795       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0731 11:14:22.964811       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0731 11:14:22.964845       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 11:14:23.030491       1 server.go:658] "Version info" version="v1.27.3"
	I0731 11:14:23.030658       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 11:14:23.031846       1 config.go:97] "Starting endpoint slice config controller"
	I0731 11:14:23.031949       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0731 11:14:23.031916       1 config.go:188] "Starting service config controller"
	I0731 11:14:23.032024       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0731 11:14:23.032222       1 config.go:315] "Starting node config controller"
	I0731 11:14:23.032307       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0731 11:14:23.132073       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0731 11:14:23.132143       1 shared_informer.go:318] Caches are synced for service config
	I0731 11:14:23.133410       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [13187f578778fa9040a90eb131ef18829033538ca50bd0d01637066fba2b5e76] <==
	* W0731 11:14:06.344367       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 11:14:06.344424       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0731 11:14:06.344577       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 11:14:06.344617       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0731 11:14:06.344609       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 11:14:06.344641       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0731 11:14:06.344686       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 11:14:06.344696       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 11:14:06.344727       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 11:14:06.344699       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 11:14:06.344776       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 11:14:06.344788       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 11:14:06.344799       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 11:14:06.344817       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 11:14:06.344866       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 11:14:06.344880       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 11:14:07.168442       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 11:14:07.168470       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 11:14:07.253301       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 11:14:07.253338       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 11:14:07.333824       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 11:14:07.333867       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0731 11:14:07.395557       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 11:14:07.395611       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0731 11:14:09.234948       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Jul 31 11:14:21 multinode-249026 kubelet[1592]: I0731 11:14:21.937530    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgqvc\" (UniqueName: \"kubernetes.io/projected/c18aed2c-dab2-4dff-b288-2e39582688bb-kube-api-access-zgqvc\") pod \"kube-proxy-f64nn\" (UID: \"c18aed2c-dab2-4dff-b288-2e39582688bb\") " pod="kube-system/kube-proxy-f64nn"
	Jul 31 11:14:21 multinode-249026 kubelet[1592]: I0731 11:14:21.937661    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ad194032-2c3b-484e-9c89-9b7bc72632b6-lib-modules\") pod \"kindnet-pgkb6\" (UID: \"ad194032-2c3b-484e-9c89-9b7bc72632b6\") " pod="kube-system/kindnet-pgkb6"
	Jul 31 11:14:21 multinode-249026 kubelet[1592]: I0731 11:14:21.937709    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c18aed2c-dab2-4dff-b288-2e39582688bb-kube-proxy\") pod \"kube-proxy-f64nn\" (UID: \"c18aed2c-dab2-4dff-b288-2e39582688bb\") " pod="kube-system/kube-proxy-f64nn"
	Jul 31 11:14:21 multinode-249026 kubelet[1592]: I0731 11:14:21.937767    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc5kn\" (UniqueName: \"kubernetes.io/projected/ad194032-2c3b-484e-9c89-9b7bc72632b6-kube-api-access-jc5kn\") pod \"kindnet-pgkb6\" (UID: \"ad194032-2c3b-484e-9c89-9b7bc72632b6\") " pod="kube-system/kindnet-pgkb6"
	Jul 31 11:14:21 multinode-249026 kubelet[1592]: I0731 11:14:21.937882    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c18aed2c-dab2-4dff-b288-2e39582688bb-lib-modules\") pod \"kube-proxy-f64nn\" (UID: \"c18aed2c-dab2-4dff-b288-2e39582688bb\") " pod="kube-system/kube-proxy-f64nn"
	Jul 31 11:14:21 multinode-249026 kubelet[1592]: I0731 11:14:21.937928    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/ad194032-2c3b-484e-9c89-9b7bc72632b6-cni-cfg\") pod \"kindnet-pgkb6\" (UID: \"ad194032-2c3b-484e-9c89-9b7bc72632b6\") " pod="kube-system/kindnet-pgkb6"
	Jul 31 11:14:22 multinode-249026 kubelet[1592]: W0731 11:14:22.260829    1592 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/6c9e307d8fbb6aa504e1b1671d79e2f602df444dc5ede76a67d98aec5cb168ff/crio-101de7c18f9e2592d85161c09883a53827d125f1405f3e02e0c1a417cef71012 WatchSource:0}: Error finding container 101de7c18f9e2592d85161c09883a53827d125f1405f3e02e0c1a417cef71012: Status 404 returned error can't find the container with id 101de7c18f9e2592d85161c09883a53827d125f1405f3e02e0c1a417cef71012
	Jul 31 11:14:22 multinode-249026 kubelet[1592]: W0731 11:14:22.261100    1592 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/6c9e307d8fbb6aa504e1b1671d79e2f602df444dc5ede76a67d98aec5cb168ff/crio-a8f16f4a4edd6c06b53d7c59b0719130d1f86a60cc7508ba198e8fba947d906c WatchSource:0}: Error finding container a8f16f4a4edd6c06b53d7c59b0719130d1f86a60cc7508ba198e8fba947d906c: Status 404 returned error can't find the container with id a8f16f4a4edd6c06b53d7c59b0719130d1f86a60cc7508ba198e8fba947d906c
	Jul 31 11:14:23 multinode-249026 kubelet[1592]: I0731 11:14:23.468249    1592 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-pgkb6" podStartSLOduration=2.468203372 podCreationTimestamp="2023-07-31 11:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-31 11:14:23.467927204 +0000 UTC m=+14.206181252" watchObservedRunningTime="2023-07-31 11:14:23.468203372 +0000 UTC m=+14.206457419"
	Jul 31 11:14:29 multinode-249026 kubelet[1592]: I0731 11:14:29.365495    1592 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-f64nn" podStartSLOduration=8.365453253 podCreationTimestamp="2023-07-31 11:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-31 11:14:23.476764746 +0000 UTC m=+14.215018810" watchObservedRunningTime="2023-07-31 11:14:29.365453253 +0000 UTC m=+20.103707303"
	Jul 31 11:14:53 multinode-249026 kubelet[1592]: I0731 11:14:53.599677    1592 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jul 31 11:14:53 multinode-249026 kubelet[1592]: I0731 11:14:53.620149    1592 topology_manager.go:212] "Topology Admit Handler"
	Jul 31 11:14:53 multinode-249026 kubelet[1592]: I0731 11:14:53.621113    1592 topology_manager.go:212] "Topology Admit Handler"
	Jul 31 11:14:53 multinode-249026 kubelet[1592]: I0731 11:14:53.757940    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/10ff228c-5c0d-4012-8b2c-79ff8210e4e1-config-volume\") pod \"coredns-5d78c9869d-z57mv\" (UID: \"10ff228c-5c0d-4012-8b2c-79ff8210e4e1\") " pod="kube-system/coredns-5d78c9869d-z57mv"
	Jul 31 11:14:53 multinode-249026 kubelet[1592]: I0731 11:14:53.757990    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhd9d\" (UniqueName: \"kubernetes.io/projected/10ff228c-5c0d-4012-8b2c-79ff8210e4e1-kube-api-access-mhd9d\") pod \"coredns-5d78c9869d-z57mv\" (UID: \"10ff228c-5c0d-4012-8b2c-79ff8210e4e1\") " pod="kube-system/coredns-5d78c9869d-z57mv"
	Jul 31 11:14:53 multinode-249026 kubelet[1592]: I0731 11:14:53.758112    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc7lh\" (UniqueName: \"kubernetes.io/projected/f702705a-8dec-4ac9-98fd-283e1f55614b-kube-api-access-bc7lh\") pod \"storage-provisioner\" (UID: \"f702705a-8dec-4ac9-98fd-283e1f55614b\") " pod="kube-system/storage-provisioner"
	Jul 31 11:14:53 multinode-249026 kubelet[1592]: I0731 11:14:53.758172    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f702705a-8dec-4ac9-98fd-283e1f55614b-tmp\") pod \"storage-provisioner\" (UID: \"f702705a-8dec-4ac9-98fd-283e1f55614b\") " pod="kube-system/storage-provisioner"
	Jul 31 11:14:53 multinode-249026 kubelet[1592]: W0731 11:14:53.968748    1592 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/6c9e307d8fbb6aa504e1b1671d79e2f602df444dc5ede76a67d98aec5cb168ff/crio-8b98d432ec3cc54b2dd24838749c14ed5ac1fb5e403f6459a1b8eed151bf8d6e WatchSource:0}: Error finding container 8b98d432ec3cc54b2dd24838749c14ed5ac1fb5e403f6459a1b8eed151bf8d6e: Status 404 returned error can't find the container with id 8b98d432ec3cc54b2dd24838749c14ed5ac1fb5e403f6459a1b8eed151bf8d6e
	Jul 31 11:14:53 multinode-249026 kubelet[1592]: W0731 11:14:53.969037    1592 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/6c9e307d8fbb6aa504e1b1671d79e2f602df444dc5ede76a67d98aec5cb168ff/crio-b9c053cda4b75813978f329239f3d953134331435a6ebadc189ad091de25ce4a WatchSource:0}: Error finding container b9c053cda4b75813978f329239f3d953134331435a6ebadc189ad091de25ce4a: Status 404 returned error can't find the container with id b9c053cda4b75813978f329239f3d953134331435a6ebadc189ad091de25ce4a
	Jul 31 11:14:54 multinode-249026 kubelet[1592]: I0731 11:14:54.520438    1592 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=31.520390688 podCreationTimestamp="2023-07-31 11:14:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-31 11:14:54.520013831 +0000 UTC m=+45.258267876" watchObservedRunningTime="2023-07-31 11:14:54.520390688 +0000 UTC m=+45.258644733"
	Jul 31 11:14:54 multinode-249026 kubelet[1592]: I0731 11:14:54.529016    1592 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-z57mv" podStartSLOduration=33.528974924 podCreationTimestamp="2023-07-31 11:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-31 11:14:54.528793218 +0000 UTC m=+45.267047262" watchObservedRunningTime="2023-07-31 11:14:54.528974924 +0000 UTC m=+45.267228969"
	Jul 31 11:15:13 multinode-249026 kubelet[1592]: I0731 11:15:13.000302    1592 topology_manager.go:212] "Topology Admit Handler"
	Jul 31 11:15:13 multinode-249026 kubelet[1592]: I0731 11:15:13.161350    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkb9t\" (UniqueName: \"kubernetes.io/projected/8e7fe2e2-85e4-4b89-95d1-c37ca302dc32-kube-api-access-jkb9t\") pod \"busybox-67b7f59bb-nhmrt\" (UID: \"8e7fe2e2-85e4-4b89-95d1-c37ca302dc32\") " pod="default/busybox-67b7f59bb-nhmrt"
	Jul 31 11:15:13 multinode-249026 kubelet[1592]: W0731 11:15:13.368575    1592 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/6c9e307d8fbb6aa504e1b1671d79e2f602df444dc5ede76a67d98aec5cb168ff/crio-22328075724f659d506730eeb32a59dbbf633f2b9ae39d04d8eea2b9e06c74e3 WatchSource:0}: Error finding container 22328075724f659d506730eeb32a59dbbf633f2b9ae39d04d8eea2b9e06c74e3: Status 404 returned error can't find the container with id 22328075724f659d506730eeb32a59dbbf633f2b9ae39d04d8eea2b9e06c74e3
	Jul 31 11:15:14 multinode-249026 kubelet[1592]: I0731 11:15:14.559448    1592 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-67b7f59bb-nhmrt" podStartSLOduration=1.809685645 podCreationTimestamp="2023-07-31 11:15:12 +0000 UTC" firstStartedPulling="2023-07-31 11:15:13.374602577 +0000 UTC m=+64.112856614" lastFinishedPulling="2023-07-31 11:15:14.124324415 +0000 UTC m=+64.862578453" observedRunningTime="2023-07-31 11:15:14.559154482 +0000 UTC m=+65.297408529" watchObservedRunningTime="2023-07-31 11:15:14.559407484 +0000 UTC m=+65.297661526"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-249026 -n multinode-249026
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-249026 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.06s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (71.06s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:132: (dbg) Run:  /tmp/minikube-v1.9.0.2086436632.exe start -p running-upgrade-285385 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0731 11:25:09.193952   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/functional-671868/client.crt: no such file or directory
version_upgrade_test.go:132: (dbg) Done: /tmp/minikube-v1.9.0.2086436632.exe start -p running-upgrade-285385 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m5.267953162s)
version_upgrade_test.go:142: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-285385 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:142: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-285385 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (2.681223214s)

                                                
                                                
-- stdout --
	* [running-upgrade-285385] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16968
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16968-8855/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16968-8855/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-285385 in cluster running-upgrade-285385
	* Pulling base image ...
	* Updating the running docker "running-upgrade-285385" container ...
	
	

-- /stdout --
** stderr ** 
	I0731 11:26:11.913994  184033 out.go:296] Setting OutFile to fd 1 ...
	I0731 11:26:11.914135  184033 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 11:26:11.914145  184033 out.go:309] Setting ErrFile to fd 2...
	I0731 11:26:11.914152  184033 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 11:26:11.914388  184033 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16968-8855/.minikube/bin
	I0731 11:26:11.914943  184033 out.go:303] Setting JSON to false
	I0731 11:26:11.916728  184033 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4123,"bootTime":1690798649,"procs":842,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1038-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 11:26:11.916808  184033 start.go:138] virtualization: kvm guest
	I0731 11:26:11.919162  184033 out.go:177] * [running-upgrade-285385] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 11:26:11.921167  184033 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 11:26:11.921221  184033 notify.go:220] Checking for updates...
	I0731 11:26:11.922617  184033 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 11:26:11.924092  184033 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16968-8855/kubeconfig
	I0731 11:26:11.925653  184033 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16968-8855/.minikube
	I0731 11:26:11.927156  184033 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 11:26:11.928623  184033 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 11:26:11.930552  184033 config.go:182] Loaded profile config "running-upgrade-285385": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0731 11:26:11.930591  184033 start_flags.go:695] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0731 11:26:11.932607  184033 out.go:177] * Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	I0731 11:26:11.935216  184033 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 11:26:11.961374  184033 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0731 11:26:11.961481  184033 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 11:26:12.033767  184033 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:76 OomKillDisable:true NGoroutines:83 SystemTime:2023-07-31 11:26:12.024469629 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1038-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0731 11:26:12.033907  184033 docker.go:294] overlay module found
	I0731 11:26:12.036707  184033 out.go:177] * Using the docker driver based on existing profile
	I0731 11:26:12.038094  184033 start.go:298] selected driver: docker
	I0731 11:26:12.038109  184033 start.go:898] validating driver "docker" against &{Name:running-upgrade-285385 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-285385 Namespace: APIServerName:minikubeCA APIServerNames:[] API
ServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.4 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: So
cketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 11:26:12.038217  184033 start.go:909] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 11:26:12.039290  184033 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 11:26:12.105163  184033 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:76 OomKillDisable:true NGoroutines:83 SystemTime:2023-07-31 11:26:12.095558412 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1038-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0731 11:26:12.105570  184033 cni.go:84] Creating CNI manager for ""
	I0731 11:26:12.105609  184033 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I0731 11:26:12.105619  184033 start_flags.go:319] config:
	{Name:running-upgrade-285385 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-285385 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlu
gin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.4 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 11:26:12.107682  184033 out.go:177] * Starting control plane node running-upgrade-285385 in cluster running-upgrade-285385
	I0731 11:26:12.108949  184033 cache.go:122] Beginning downloading kic base image for docker with crio
	I0731 11:26:12.110344  184033 out.go:177] * Pulling base image ...
	I0731 11:26:12.111752  184033 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I0731 11:26:12.111783  184033 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0731 11:26:12.129132  184033 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0731 11:26:12.129155  184033 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	W0731 11:26:12.140200  184033 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0731 11:26:12.140344  184033 profile.go:148] Saving config to /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/running-upgrade-285385/config.json ...
	I0731 11:26:12.140392  184033 cache.go:107] acquiring lock: {Name:mkf396591a3453fa520ffad7828607c774845845 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 11:26:12.140465  184033 cache.go:107] acquiring lock: {Name:mk400a620f7582b5f416b10b77e40f0aadf8fa1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 11:26:12.140528  184033 cache.go:115] /home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I0731 11:26:12.140542  184033 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 160.031µs
	I0731 11:26:12.140560  184033 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I0731 11:26:12.140543  184033 cache.go:107] acquiring lock: {Name:mkbc86b240b1320f02dbe4862cc092e51450b8b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 11:26:12.140544  184033 cache.go:107] acquiring lock: {Name:mk54d597062a2fb5da9c816ede230ff966328b66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 11:26:12.140595  184033 cache.go:115] /home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I0731 11:26:12.140604  184033 cache.go:195] Successfully downloaded all kic artifacts
	I0731 11:26:12.140584  184033 cache.go:107] acquiring lock: {Name:mka7111565254523b27aca90ed40dc4261478bfe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 11:26:12.140608  184033 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 68.232µs
	I0731 11:26:12.140617  184033 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I0731 11:26:12.140530  184033 cache.go:115] /home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I0731 11:26:12.140629  184033 cache.go:115] /home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I0731 11:26:12.140631  184033 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 172.234µs
	I0731 11:26:12.140603  184033 cache.go:107] acquiring lock: {Name:mk23ceb90033920dc20a6ecd3ea153645477716e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 11:26:12.140641  184033 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 124.673µs
	I0731 11:26:12.140651  184033 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I0731 11:26:12.140391  184033 cache.go:107] acquiring lock: {Name:mkb3a353a96ddfef965bcf32589ca7f2c5f932cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 11:26:12.140653  184033 cache.go:107] acquiring lock: {Name:mkcaf0220ef5a639038afae815dd7a330cef3dec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 11:26:12.140640  184033 cache.go:115] /home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I0731 11:26:12.140705  184033 cache.go:115] /home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0731 11:26:12.140709  184033 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 112.678µs
	I0731 11:26:12.140725  184033 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 336.806µs
	I0731 11:26:12.140739  184033 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I0731 11:26:12.140740  184033 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0731 11:26:12.140754  184033 cache.go:115] /home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0731 11:26:12.140756  184033 cache.go:115] /home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I0731 11:26:12.140768  184033 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 202.218µs
	I0731 11:26:12.140773  184033 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 121.849µs
	I0731 11:26:12.140778  184033 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0731 11:26:12.140783  184033 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I0731 11:26:12.140636  184033 start.go:365] acquiring machines lock for running-upgrade-285385: {Name:mk11fdc528b769cb5dd2691599260528fbd3dfbf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 11:26:12.140844  184033 start.go:369] acquired machines lock for "running-upgrade-285385" in 48.41µs
	I0731 11:26:12.140864  184033 start.go:96] Skipping create...Using existing machine configuration
	I0731 11:26:12.140869  184033 fix.go:54] fixHost starting: m01
	I0731 11:26:12.140640  184033 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I0731 11:26:12.140927  184033 cache.go:87] Successfully saved all images to host disk.
	I0731 11:26:12.141121  184033 cli_runner.go:164] Run: docker container inspect running-upgrade-285385 --format={{.State.Status}}
	I0731 11:26:12.160722  184033 fix.go:102] recreateIfNeeded on running-upgrade-285385: state=Running err=<nil>
	W0731 11:26:12.160755  184033 fix.go:128] unexpected machine state, will restart: <nil>
	I0731 11:26:12.163567  184033 out.go:177] * Updating the running docker "running-upgrade-285385" container ...
	I0731 11:26:12.165042  184033 machine.go:88] provisioning docker machine ...
	I0731 11:26:12.165081  184033 ubuntu.go:169] provisioning hostname "running-upgrade-285385"
	I0731 11:26:12.165146  184033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-285385
	I0731 11:26:12.184786  184033 main.go:141] libmachine: Using SSH client type: native
	I0731 11:26:12.185279  184033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eb00] 0x811ba0 <nil>  [] 0s} 127.0.0.1 32951 <nil> <nil>}
	I0731 11:26:12.185301  184033 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-285385 && echo "running-upgrade-285385" | sudo tee /etc/hostname
	I0731 11:26:12.310985  184033 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-285385
	
	I0731 11:26:12.311064  184033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-285385
	I0731 11:26:12.328629  184033 main.go:141] libmachine: Using SSH client type: native
	I0731 11:26:12.329042  184033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eb00] 0x811ba0 <nil>  [] 0s} 127.0.0.1 32951 <nil> <nil>}
	I0731 11:26:12.329061  184033 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-285385' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-285385/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-285385' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 11:26:12.439582  184033 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 11:26:12.439624  184033 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16968-8855/.minikube CaCertPath:/home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16968-8855/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16968-8855/.minikube}
	I0731 11:26:12.439648  184033 ubuntu.go:177] setting up certificates
	I0731 11:26:12.439656  184033 provision.go:83] configureAuth start
	I0731 11:26:12.439720  184033 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-285385
	I0731 11:26:12.460934  184033 provision.go:138] copyHostCerts
	I0731 11:26:12.460998  184033 exec_runner.go:144] found /home/jenkins/minikube-integration/16968-8855/.minikube/ca.pem, removing ...
	I0731 11:26:12.461016  184033 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16968-8855/.minikube/ca.pem
	I0731 11:26:12.461098  184033 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16968-8855/.minikube/ca.pem (1082 bytes)
	I0731 11:26:12.461234  184033 exec_runner.go:144] found /home/jenkins/minikube-integration/16968-8855/.minikube/cert.pem, removing ...
	I0731 11:26:12.461249  184033 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16968-8855/.minikube/cert.pem
	I0731 11:26:12.461286  184033 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16968-8855/.minikube/cert.pem (1123 bytes)
	I0731 11:26:12.461369  184033 exec_runner.go:144] found /home/jenkins/minikube-integration/16968-8855/.minikube/key.pem, removing ...
	I0731 11:26:12.461382  184033 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16968-8855/.minikube/key.pem
	I0731 11:26:12.461422  184033 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16968-8855/.minikube/key.pem (1675 bytes)
	I0731 11:26:12.461492  184033 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16968-8855/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-285385 san=[172.17.0.4 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-285385]
	I0731 11:26:12.601325  184033 provision.go:172] copyRemoteCerts
	I0731 11:26:12.601386  184033 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 11:26:12.601427  184033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-285385
	I0731 11:26:12.621390  184033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32951 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/running-upgrade-285385/id_rsa Username:docker}
	I0731 11:26:12.708094  184033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 11:26:12.726899  184033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0731 11:26:12.745263  184033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 11:26:12.762674  184033 provision.go:86] duration metric: configureAuth took 323.006247ms
	I0731 11:26:12.762702  184033 ubuntu.go:193] setting minikube options for container-runtime
	I0731 11:26:12.762888  184033 config.go:182] Loaded profile config "running-upgrade-285385": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0731 11:26:12.762995  184033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-285385
	I0731 11:26:12.782467  184033 main.go:141] libmachine: Using SSH client type: native
	I0731 11:26:12.782844  184033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eb00] 0x811ba0 <nil>  [] 0s} 127.0.0.1 32951 <nil> <nil>}
	I0731 11:26:12.782860  184033 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 11:26:13.234444  184033 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 11:26:13.234475  184033 machine.go:91] provisioned docker machine in 1.069405957s
	I0731 11:26:13.234487  184033 start.go:300] post-start starting for "running-upgrade-285385" (driver="docker")
	I0731 11:26:13.234500  184033 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 11:26:13.234557  184033 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 11:26:13.234591  184033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-285385
	I0731 11:26:13.273490  184033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32951 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/running-upgrade-285385/id_rsa Username:docker}
	I0731 11:26:13.368333  184033 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 11:26:13.372432  184033 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0731 11:26:13.372477  184033 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0731 11:26:13.372492  184033 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0731 11:26:13.372506  184033 info.go:137] Remote host: Ubuntu 19.10
	I0731 11:26:13.372517  184033 filesync.go:126] Scanning /home/jenkins/minikube-integration/16968-8855/.minikube/addons for local assets ...
	I0731 11:26:13.372583  184033 filesync.go:126] Scanning /home/jenkins/minikube-integration/16968-8855/.minikube/files for local assets ...
	I0731 11:26:13.372676  184033 filesync.go:149] local asset: /home/jenkins/minikube-integration/16968-8855/.minikube/files/etc/ssl/certs/156462.pem -> 156462.pem in /etc/ssl/certs
	I0731 11:26:13.372793  184033 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 11:26:13.380241  184033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/files/etc/ssl/certs/156462.pem --> /etc/ssl/certs/156462.pem (1708 bytes)
	I0731 11:26:13.397305  184033 start.go:303] post-start completed in 162.797318ms
	I0731 11:26:13.397379  184033 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 11:26:13.397422  184033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-285385
	I0731 11:26:13.416118  184033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32951 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/running-upgrade-285385/id_rsa Username:docker}
	I0731 11:26:13.500723  184033 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0731 11:26:13.505254  184033 fix.go:56] fixHost completed within 1.364379202s
	I0731 11:26:13.505316  184033 start.go:83] releasing machines lock for "running-upgrade-285385", held for 1.364459609s
	I0731 11:26:13.505394  184033 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-285385
	I0731 11:26:13.526524  184033 ssh_runner.go:195] Run: cat /version.json
	I0731 11:26:13.526573  184033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-285385
	I0731 11:26:13.526668  184033 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 11:26:13.526745  184033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-285385
	I0731 11:26:13.561483  184033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32951 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/running-upgrade-285385/id_rsa Username:docker}
	I0731 11:26:13.569162  184033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32951 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/running-upgrade-285385/id_rsa Username:docker}
	W0731 11:26:13.673637  184033 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0731 11:26:13.673716  184033 ssh_runner.go:195] Run: systemctl --version
	I0731 11:26:13.802114  184033 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 11:26:13.870962  184033 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0731 11:26:13.875776  184033 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 11:26:13.983390  184033 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0731 11:26:13.983478  184033 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 11:26:14.157264  184033 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 11:26:14.157288  184033 start.go:466] detecting cgroup driver to use...
	I0731 11:26:14.157324  184033 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0731 11:26:14.157373  184033 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 11:26:14.179561  184033 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 11:26:14.188646  184033 docker.go:196] disabling cri-docker service (if available) ...
	I0731 11:26:14.188695  184033 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 11:26:14.198197  184033 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 11:26:14.208551  184033 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0731 11:26:14.219275  184033 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0731 11:26:14.219343  184033 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 11:26:14.305539  184033 docker.go:212] disabling docker service ...
	I0731 11:26:14.305606  184033 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 11:26:14.316045  184033 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 11:26:14.325041  184033 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 11:26:14.411782  184033 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 11:26:14.498616  184033 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 11:26:14.507464  184033 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 11:26:14.519756  184033 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0731 11:26:14.519822  184033 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 11:26:14.532938  184033 out.go:177] 
	W0731 11:26:14.534686  184033 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0731 11:26:14.534710  184033 out.go:239] * 
	* 
	W0731 11:26:14.535781  184033 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 11:26:14.537511  184033 out.go:177] 

** /stderr **
version_upgrade_test.go:144: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-285385 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
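Note: the root cause in the stderr above is that the new binary's pause_image rewrite targets /etc/crio/crio.conf.d/02-crio.conf, and that drop-in does not exist on the Ubuntu 19.10 image provisioned by the v1.9.0 kicbase, so the sed exits with status 2. A minimal Go sketch of a more defensive rewrite is below. This is not minikube's actual code: the run helper shells out locally for illustration, whereas the test drives commands through the SSH runner, and the fallback to /etc/crio/crio.conf is an assumption about where old images keep their cri-o config.

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command locally; minikube would route this through ssh_runner instead.
func run(args ...string) error {
	return exec.Command(args[0], args[1:]...).Run()
}

// setPauseImage probes for the newer crio.conf.d drop-in first and falls back to
// the monolithic /etc/crio/crio.conf, instead of assuming the drop-in exists.
func setPauseImage(image string) error {
	sedExpr := fmt.Sprintf(`s|^.*pause_image = .*$|pause_image = %q|`, image)
	for _, conf := range []string{"/etc/crio/crio.conf.d/02-crio.conf", "/etc/crio/crio.conf"} {
		if err := run("sudo", "test", "-f", conf); err != nil {
			continue // config file absent on this image; try the next candidate
		}
		return run("sudo", "sed", "-i", sedExpr, conf)
	}
	return fmt.Errorf("no cri-o config found to update pause_image")
}

func main() {
	if err := setPauseImage("registry.k8s.io/pause:3.2"); err != nil {
		fmt.Println("RUNTIME_ENABLE would fail here:", err)
	}
}

On the v1.9.0 image this would fall through to /etc/crio/crio.conf rather than aborting the start with exit status 90.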
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-07-31 11:26:14.556672689 +0000 UTC m=+1888.426564240
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-285385
helpers_test.go:235: (dbg) docker inspect running-upgrade-285385:

-- stdout --
	[
	    {
	        "Id": "8bb68ef6651015618fafa849659022215dfe2300c325b772f51698ccb302c0f4",
	        "Created": "2023-07-31T11:25:07.20485499Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 167998,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-31T11:25:08.019940394Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/8bb68ef6651015618fafa849659022215dfe2300c325b772f51698ccb302c0f4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8bb68ef6651015618fafa849659022215dfe2300c325b772f51698ccb302c0f4/hostname",
	        "HostsPath": "/var/lib/docker/containers/8bb68ef6651015618fafa849659022215dfe2300c325b772f51698ccb302c0f4/hosts",
	        "LogPath": "/var/lib/docker/containers/8bb68ef6651015618fafa849659022215dfe2300c325b772f51698ccb302c0f4/8bb68ef6651015618fafa849659022215dfe2300c325b772f51698ccb302c0f4-json.log",
	        "Name": "/running-upgrade-285385",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-285385:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5d5ea7d17d04383a85f53f51fb94fea9888a615602fbba4450126731e9289010-init/diff:/var/lib/docker/overlay2/a27c8630ccb95d3db940642a8a08d2ff767580cebcef2ef1303818b74c246e83/diff:/var/lib/docker/overlay2/770363fa22db5987cdde4d30cb6af6b2207a2f0ae5be87adb4325447fae9cb30/diff:/var/lib/docker/overlay2/df2a131388fd65781537dd1aacc3e25132c6cc777fe1fe4a3cb2132490b64e54/diff:/var/lib/docker/overlay2/e81fd7d390b2514a4cf058e61ebcd969b75adc4570649447aa4693a80d428c8e/diff:/var/lib/docker/overlay2/c08d964312c95794cac76d4f7e761980e69ea768e85eedd4fdb7a0b403797800/diff:/var/lib/docker/overlay2/ebab87b912e3d6da2dfd308a7640d416bd22f627d06627af2c660418cd95e7d6/diff:/var/lib/docker/overlay2/b59934dea7c396a906d02e4bbdd6d3d1e0815dd0b7e3d88cd89564a477119b77/diff:/var/lib/docker/overlay2/bd66f2850bf312255dc4f077228319216630993f2d33d93b1686d353c93d0fc5/diff:/var/lib/docker/overlay2/f54b3797fe49b8e8b00ad5a201c5bb52ca54ec10031f421d8f57f8dedf2ed73b/diff:/var/lib/docker/overlay2/790a71
e76cf8bfc1f9d5ac50f857af72d5579d0b08bbebc23af83fb55e16d484/diff:/var/lib/docker/overlay2/a071ac94eab8ab4ee574e5fac8e3f82e41a4a90095e4b6069fcf858540bdc56c/diff:/var/lib/docker/overlay2/add15aea29b21e8a177b6b001c68b1b41771ae311f9bf9f2f5f911f5244083b4/diff:/var/lib/docker/overlay2/ce35670cb001f8347329ff4ddb7e5cb974eb69b2ad133ec12842bbafda102e08/diff:/var/lib/docker/overlay2/2710286f102c54f534e8ccaa85157a71ccefc4848b91c1f6ee452e2720f986a5/diff:/var/lib/docker/overlay2/ca3e7c95b4827fc1f5b6780bffe9222433020f1770bf8bfcf026da589adb642a/diff:/var/lib/docker/overlay2/09f4b17befc712459a85af5ce303ed403eaf8edf2bf730d3c2a3bc75646c8417/diff:/var/lib/docker/overlay2/b2c949b91fa5cebd644d397b75c615b624df5640234cfb81a11808601d8cbb18/diff:/var/lib/docker/overlay2/517334745f150c77a53f23b77e3cab52d5b70619fa6f4025ae9bfffe3a9195c8/diff:/var/lib/docker/overlay2/a2fd031cb780fae6c264e1f5d1ab4cc70ccc77e7bdc13d8330e982ba92813bce/diff:/var/lib/docker/overlay2/236ccf9d15774ecc9ce6150f3bf040520446d825e4cda50cf4502bd094ebb165/diff:/var/lib/d
ocker/overlay2/05e5e7a4f348d7450fd9095d44147a0c762b894ca77f09eaf78bc306d73e77df/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5d5ea7d17d04383a85f53f51fb94fea9888a615602fbba4450126731e9289010/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5d5ea7d17d04383a85f53f51fb94fea9888a615602fbba4450126731e9289010/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5d5ea7d17d04383a85f53f51fb94fea9888a615602fbba4450126731e9289010/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-285385",
	                "Source": "/var/lib/docker/volumes/running-upgrade-285385/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-285385",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-285385",
	                "name.minikube.sigs.k8s.io": "running-upgrade-285385",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1dd47e17db32a439e3c4a397b224a95c8c317a60a04baffe6fea58ffae90a377",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32951"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32950"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32949"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1dd47e17db32",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "0ec952fc274201a8c90156cefbdf9138adb65fce60923d30199566375d51553f",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.4",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:04",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "436cf342315fc3b72531e7a015644806e4ada2a441ec11abcf55def89054fa69",
	                    "EndpointID": "0ec952fc274201a8c90156cefbdf9138adb65fce60923d30199566375d51553f",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.4",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:04",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
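For reference, the SSH port the harness dialed earlier (127.0.0.1:32951 in the cli_runner inspect calls) comes straight out of the NetworkSettings.Ports map in this inspect JSON. A small self-contained sketch of extracting it, with the struct trimmed to only the fields used here and the raw input reduced to the relevant fragment of the output above:

package main

import (
	"encoding/json"
	"fmt"
)

// inspect mirrors just the port-binding fields of `docker inspect` output.
type inspect struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	// Trimmed from the inspect output above; `docker inspect` returns a JSON array.
	raw := `[{"NetworkSettings":{"Ports":{"22/tcp":[{"HostIp":"127.0.0.1","HostPort":"32951"}]}}}]`
	var out []inspect
	if err := json.Unmarshal([]byte(raw), &out); err != nil {
		panic(err)
	}
	b := out[0].NetworkSettings.Ports["22/tcp"][0]
	fmt.Printf("ssh reachable at %s:%s\n", b.HostIp, b.HostPort) // 127.0.0.1:32951
}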
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-285385 -n running-upgrade-285385
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-285385 -n running-upgrade-285385: exit status 4 (312.211931ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0731 11:26:14.862033  185156 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-285385" does not appear in /home/jenkins/minikube-integration/16968-8855/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-285385" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
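The exit status 4 here is the kubeconfig staleness check: because the upgraded binary exited before configuring Kubernetes, the profile's cluster entry was never (re)written into the kubeconfig, which is what the status.go:415 error reports. A rough Go equivalent of that check, assuming a module with k8s.io/client-go available (the path is copied from the error above, and this is a sketch of the check, not minikube's implementation):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// kubeconfig path taken from the status error above
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/16968-8855/kubeconfig")
	if err != nil {
		panic(err)
	}
	if _, ok := cfg.Clusters["running-upgrade-285385"]; !ok {
		fmt.Println(`cluster not in kubeconfig; "minikube update-context" (or a successful start) would re-add it`)
	}
}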
helpers_test.go:175: Cleaning up "running-upgrade-285385" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-285385
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-285385: (2.204070789s)
--- FAIL: TestRunningBinaryUpgrade (71.06s)

TestStoppedBinaryUpgrade/Upgrade (80.4s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /tmp/minikube-v1.9.0.4154153036.exe start -p stopped-upgrade-841889 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:195: (dbg) Done: /tmp/minikube-v1.9.0.4154153036.exe start -p stopped-upgrade-841889 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m11.671823357s)
version_upgrade_test.go:204: (dbg) Run:  /tmp/minikube-v1.9.0.4154153036.exe -p stopped-upgrade-841889 stop
version_upgrade_test.go:204: (dbg) Done: /tmp/minikube-v1.9.0.4154153036.exe -p stopped-upgrade-841889 stop: (2.034240946s)
version_upgrade_test.go:210: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-841889 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:210: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-841889 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (6.687180148s)

-- stdout --
	* [stopped-upgrade-841889] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16968
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16968-8855/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16968-8855/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-841889 in cluster stopped-upgrade-841889
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-841889" ...
	
	

-- /stdout --
** stderr ** 
	I0731 11:26:05.500163  182657 out.go:296] Setting OutFile to fd 1 ...
	I0731 11:26:05.500316  182657 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 11:26:05.500326  182657 out.go:309] Setting ErrFile to fd 2...
	I0731 11:26:05.500333  182657 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 11:26:05.500551  182657 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16968-8855/.minikube/bin
	I0731 11:26:05.501117  182657 out.go:303] Setting JSON to false
	I0731 11:26:05.502843  182657 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4117,"bootTime":1690798649,"procs":832,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1038-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 11:26:05.502904  182657 start.go:138] virtualization: kvm guest
	I0731 11:26:05.505305  182657 out.go:177] * [stopped-upgrade-841889] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 11:26:05.507084  182657 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 11:26:05.508478  182657 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 11:26:05.507100  182657 notify.go:220] Checking for updates...
	I0731 11:26:05.510152  182657 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16968-8855/kubeconfig
	I0731 11:26:05.511778  182657 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16968-8855/.minikube
	I0731 11:26:05.513292  182657 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 11:26:05.514748  182657 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 11:26:05.516566  182657 config.go:182] Loaded profile config "stopped-upgrade-841889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0731 11:26:05.516603  182657 start_flags.go:695] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0731 11:26:05.519127  182657 out.go:177] * Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	I0731 11:26:05.520652  182657 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 11:26:05.544444  182657 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0731 11:26:05.544547  182657 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 11:26:05.601677  182657 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:80 OomKillDisable:true NGoroutines:97 SystemTime:2023-07-31 11:26:05.593063895 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1038-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0731 11:26:05.601778  182657 docker.go:294] overlay module found
	I0731 11:26:05.603562  182657 out.go:177] * Using the docker driver based on existing profile
	I0731 11:26:05.605036  182657 start.go:298] selected driver: docker
	I0731 11:26:05.605050  182657 start.go:898] validating driver "docker" against &{Name:stopped-upgrade-841889 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-841889 Namespace: APIServerName:minikubeCA APIServerNames:[] API
ServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: So
cketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 11:26:05.605145  182657 start.go:909] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 11:26:05.605874  182657 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 11:26:05.661797  182657 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:80 OomKillDisable:true NGoroutines:97 SystemTime:2023-07-31 11:26:05.652788056 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1038-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0731 11:26:05.662151  182657 cni.go:84] Creating CNI manager for ""
	I0731 11:26:05.662170  182657 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I0731 11:26:05.662181  182657 start_flags.go:319] config:
	{Name:stopped-upgrade-841889 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-841889 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlu
gin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 11:26:05.664200  182657 out.go:177] * Starting control plane node stopped-upgrade-841889 in cluster stopped-upgrade-841889
	I0731 11:26:05.665942  182657 cache.go:122] Beginning downloading kic base image for docker with crio
	I0731 11:26:05.667390  182657 out.go:177] * Pulling base image ...
	I0731 11:26:05.668731  182657 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I0731 11:26:05.668757  182657 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0731 11:26:05.686400  182657 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0731 11:26:05.686424  182657 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	W0731 11:26:05.696036  182657 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0731 11:26:05.696192  182657 profile.go:148] Saving config to /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/stopped-upgrade-841889/config.json ...
	I0731 11:26:05.696391  182657 cache.go:107] acquiring lock: {Name:mk400a620f7582b5f416b10b77e40f0aadf8fa1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 11:26:05.696374  182657 cache.go:107] acquiring lock: {Name:mkf396591a3453fa520ffad7828607c774845845 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 11:26:05.696443  182657 cache.go:107] acquiring lock: {Name:mkcaf0220ef5a639038afae815dd7a330cef3dec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 11:26:05.696464  182657 cache.go:107] acquiring lock: {Name:mkbc86b240b1320f02dbe4862cc092e51450b8b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 11:26:05.696484  182657 cache.go:195] Successfully downloaded all kic artifacts
	I0731 11:26:05.696489  182657 cache.go:115] /home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I0731 11:26:05.696473  182657 cache.go:107] acquiring lock: {Name:mk54d597062a2fb5da9c816ede230ff966328b66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 11:26:05.696499  182657 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 136.581µs
	I0731 11:26:05.696511  182657 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I0731 11:26:05.696508  182657 start.go:365] acquiring machines lock for stopped-upgrade-841889: {Name:mk83cc4fa6e6f03fc8368a4cce38d28d7703705c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 11:26:05.696499  182657 cache.go:107] acquiring lock: {Name:mka7111565254523b27aca90ed40dc4261478bfe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 11:26:05.696526  182657 cache.go:115] /home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I0731 11:26:05.696503  182657 cache.go:107] acquiring lock: {Name:mk23ceb90033920dc20a6ecd3ea153645477716e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 11:26:05.696578  182657 cache.go:115] /home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I0731 11:26:05.696588  182657 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 151.161µs
	I0731 11:26:05.696597  182657 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I0731 11:26:05.696367  182657 cache.go:107] acquiring lock: {Name:mkb3a353a96ddfef965bcf32589ca7f2c5f932cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 11:26:05.696569  182657 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 105.281µs
	I0731 11:26:05.696613  182657 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I0731 11:26:05.696624  182657 cache.go:115] /home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I0731 11:26:05.696626  182657 cache.go:115] /home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I0731 11:26:05.696635  182657 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 253.922µs
	I0731 11:26:05.696647  182657 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I0731 11:26:05.696638  182657 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 199.199µs
	I0731 11:26:05.696678  182657 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I0731 11:26:05.696687  182657 cache.go:115] /home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I0731 11:26:05.696697  182657 start.go:369] acquired machines lock for "stopped-upgrade-841889" in 175.452µs
	I0731 11:26:05.696689  182657 cache.go:115] /home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0731 11:26:05.696726  182657 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 366.124µs
	I0731 11:26:05.696744  182657 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0731 11:26:05.696712  182657 start.go:96] Skipping create...Using existing machine configuration
	I0731 11:26:05.696758  182657 fix.go:54] fixHost starting: m01
	I0731 11:26:05.696689  182657 cache.go:115] /home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0731 11:26:05.696817  182657 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 348.98µs
	I0731 11:26:05.696831  182657 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0731 11:26:05.696710  182657 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 217.204µs
	I0731 11:26:05.696862  182657 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/16968-8855/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I0731 11:26:05.696872  182657 cache.go:87] Successfully saved all images to host disk.
	I0731 11:26:05.697046  182657 cli_runner.go:164] Run: docker container inspect stopped-upgrade-841889 --format={{.State.Status}}
	I0731 11:26:05.714853  182657 fix.go:102] recreateIfNeeded on stopped-upgrade-841889: state=Stopped err=<nil>
	W0731 11:26:05.714887  182657 fix.go:128] unexpected machine state, will restart: <nil>
	I0731 11:26:05.717080  182657 out.go:177] * Restarting existing docker container for "stopped-upgrade-841889" ...
	I0731 11:26:05.718686  182657 cli_runner.go:164] Run: docker start stopped-upgrade-841889
	I0731 11:26:05.997428  182657 cli_runner.go:164] Run: docker container inspect stopped-upgrade-841889 --format={{.State.Status}}
	I0731 11:26:06.014825  182657 kic.go:426] container "stopped-upgrade-841889" state is running.
	I0731 11:26:06.015312  182657 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-841889
	I0731 11:26:06.034236  182657 profile.go:148] Saving config to /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/stopped-upgrade-841889/config.json ...
	I0731 11:26:06.034443  182657 machine.go:88] provisioning docker machine ...
	I0731 11:26:06.034466  182657 ubuntu.go:169] provisioning hostname "stopped-upgrade-841889"
	I0731 11:26:06.034502  182657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-841889
	I0731 11:26:06.051501  182657 main.go:141] libmachine: Using SSH client type: native
	I0731 11:26:06.052239  182657 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eb00] 0x811ba0 <nil>  [] 0s} 127.0.0.1 32964 <nil> <nil>}
	I0731 11:26:06.052263  182657 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-841889 && echo "stopped-upgrade-841889" | sudo tee /etc/hostname
	I0731 11:26:06.052956  182657 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59126->127.0.0.1:32964: read: connection reset by peer
	I0731 11:26:09.167904  182657 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-841889
	
	I0731 11:26:09.167989  182657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-841889
	I0731 11:26:09.184900  182657 main.go:141] libmachine: Using SSH client type: native
	I0731 11:26:09.185302  182657 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eb00] 0x811ba0 <nil>  [] 0s} 127.0.0.1 32964 <nil> <nil>}
	I0731 11:26:09.185323  182657 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-841889' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-841889/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-841889' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 11:26:09.291938  182657 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 11:26:09.291969  182657 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16968-8855/.minikube CaCertPath:/home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16968-8855/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16968-8855/.minikube}
	I0731 11:26:09.291992  182657 ubuntu.go:177] setting up certificates
	I0731 11:26:09.292002  182657 provision.go:83] configureAuth start
	I0731 11:26:09.292055  182657 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-841889
	I0731 11:26:09.312022  182657 provision.go:138] copyHostCerts
	I0731 11:26:09.312091  182657 exec_runner.go:144] found /home/jenkins/minikube-integration/16968-8855/.minikube/ca.pem, removing ...
	I0731 11:26:09.312109  182657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16968-8855/.minikube/ca.pem
	I0731 11:26:09.312192  182657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16968-8855/.minikube/ca.pem (1082 bytes)
	I0731 11:26:09.312328  182657 exec_runner.go:144] found /home/jenkins/minikube-integration/16968-8855/.minikube/cert.pem, removing ...
	I0731 11:26:09.312348  182657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16968-8855/.minikube/cert.pem
	I0731 11:26:09.312386  182657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16968-8855/.minikube/cert.pem (1123 bytes)
	I0731 11:26:09.312461  182657 exec_runner.go:144] found /home/jenkins/minikube-integration/16968-8855/.minikube/key.pem, removing ...
	I0731 11:26:09.312471  182657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16968-8855/.minikube/key.pem
	I0731 11:26:09.312502  182657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16968-8855/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16968-8855/.minikube/key.pem (1675 bytes)
	I0731 11:26:09.312578  182657 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16968-8855/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-841889 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-841889]
	I0731 11:26:09.593133  182657 provision.go:172] copyRemoteCerts
	I0731 11:26:09.593225  182657 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 11:26:09.593265  182657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-841889
	I0731 11:26:09.614547  182657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32964 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/stopped-upgrade-841889/id_rsa Username:docker}
	I0731 11:26:09.704308  182657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 11:26:09.724767  182657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0731 11:26:09.745267  182657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 11:26:09.787690  182657 provision.go:86] duration metric: configureAuth took 495.674196ms
	I0731 11:26:09.787716  182657 ubuntu.go:193] setting minikube options for container-runtime
	I0731 11:26:09.787866  182657 config.go:182] Loaded profile config "stopped-upgrade-841889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0731 11:26:09.787976  182657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-841889
	I0731 11:26:09.809781  182657 main.go:141] libmachine: Using SSH client type: native
	I0731 11:26:09.810421  182657 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eb00] 0x811ba0 <nil>  [] 0s} 127.0.0.1 32964 <nil> <nil>}
	I0731 11:26:09.810452  182657 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 11:26:11.030981  182657 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 11:26:11.031004  182657 machine.go:91] provisioned docker machine in 4.996548986s
	I0731 11:26:11.031012  182657 start.go:300] post-start starting for "stopped-upgrade-841889" (driver="docker")
	I0731 11:26:11.031021  182657 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 11:26:11.031071  182657 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 11:26:11.031112  182657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-841889
	I0731 11:26:11.052807  182657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32964 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/stopped-upgrade-841889/id_rsa Username:docker}
	I0731 11:26:11.139965  182657 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 11:26:11.142861  182657 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0731 11:26:11.142894  182657 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0731 11:26:11.142910  182657 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0731 11:26:11.142917  182657 info.go:137] Remote host: Ubuntu 19.10
	I0731 11:26:11.142929  182657 filesync.go:126] Scanning /home/jenkins/minikube-integration/16968-8855/.minikube/addons for local assets ...
	I0731 11:26:11.143001  182657 filesync.go:126] Scanning /home/jenkins/minikube-integration/16968-8855/.minikube/files for local assets ...
	I0731 11:26:11.143099  182657 filesync.go:149] local asset: /home/jenkins/minikube-integration/16968-8855/.minikube/files/etc/ssl/certs/156462.pem -> 156462.pem in /etc/ssl/certs
	I0731 11:26:11.143236  182657 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 11:26:11.150263  182657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16968-8855/.minikube/files/etc/ssl/certs/156462.pem --> /etc/ssl/certs/156462.pem (1708 bytes)
	I0731 11:26:11.168396  182657 start.go:303] post-start completed in 137.368916ms
	I0731 11:26:11.168475  182657 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 11:26:11.168530  182657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-841889
	I0731 11:26:11.188296  182657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32964 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/stopped-upgrade-841889/id_rsa Username:docker}
	I0731 11:26:11.296868  182657 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0731 11:26:11.301789  182657 fix.go:56] fixHost completed within 5.605023326s
	I0731 11:26:11.301813  182657 start.go:83] releasing machines lock for "stopped-upgrade-841889", held for 5.605104346s
	I0731 11:26:11.301884  182657 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-841889
	I0731 11:26:11.322998  182657 ssh_runner.go:195] Run: cat /version.json
	I0731 11:26:11.323053  182657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-841889
	I0731 11:26:11.323082  182657 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 11:26:11.323160  182657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-841889
	I0731 11:26:11.343196  182657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32964 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/stopped-upgrade-841889/id_rsa Username:docker}
	I0731 11:26:11.343189  182657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32964 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/stopped-upgrade-841889/id_rsa Username:docker}
	W0731 11:26:11.649438  182657 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0731 11:26:11.649506  182657 ssh_runner.go:195] Run: systemctl --version
	I0731 11:26:11.653469  182657 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 11:26:11.709758  182657 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0731 11:26:11.714998  182657 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 11:26:11.732143  182657 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0731 11:26:11.732241  182657 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 11:26:11.773957  182657 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 11:26:11.773979  182657 start.go:466] detecting cgroup driver to use...
	I0731 11:26:11.774006  182657 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0731 11:26:11.774045  182657 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 11:26:11.796634  182657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 11:26:11.805904  182657 docker.go:196] disabling cri-docker service (if available) ...
	I0731 11:26:11.805947  182657 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 11:26:11.815459  182657 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 11:26:11.824473  182657 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0731 11:26:11.832969  182657 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0731 11:26:11.833026  182657 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 11:26:11.916820  182657 docker.go:212] disabling docker service ...
	I0731 11:26:11.916877  182657 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 11:26:11.926926  182657 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 11:26:11.937185  182657 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 11:26:12.008039  182657 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 11:26:12.099170  182657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 11:26:12.114365  182657 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 11:26:12.128547  182657 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0731 11:26:12.128607  182657 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 11:26:12.138061  182657 out.go:177] 
	W0731 11:26:12.139722  182657 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0731 11:26:12.139743  182657 out.go:239] * 
	W0731 11:26:12.140963  182657 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 11:26:12.142786  182657 out.go:177] 

** /stderr **
version_upgrade_test.go:212: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-841889 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (80.40s)
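
Diagnosis: the upgraded binary fails at the pause_image rewrite because the restarted v1.9.0-era container (the remote host reports Ubuntu 19.10 above) has no /etc/crio/crio.conf.d/02-crio.conf, so the sed exits with status 2. A minimal guard sketch, assuming that image's CRI-O honors TOML drop-ins under /etc/crio/crio.conf.d (the seeded [crio.image] section is likewise an assumption, not minikube's actual fix):

	CONF=/etc/crio/crio.conf.d/02-crio.conf
	if [ ! -f "$CONF" ]; then
	    # Hypothetical seed: give the sed below a pause_image line to rewrite
	    sudo mkdir -p "$(dirname "$CONF")"
	    printf '[crio.image]\npause_image = ""\n' | sudo tee "$CONF" >/dev/null
	fi
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$CONF"

If that CRI-O build only reads /etc/crio/crio.conf, pointing the sed at that file instead is the equivalent fallback.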


Test pass (274/304)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 8.18
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.06
10 TestDownloadOnly/v1.27.3/json-events 53.47
11 TestDownloadOnly/v1.27.3/preload-exists 0
15 TestDownloadOnly/v1.27.3/LogsDuration 0.06
16 TestDownloadOnly/DeleteAll 0.19
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.12
18 TestDownloadOnlyKic 1.16
19 TestBinaryMirror 0.69
20 TestOffline 52.99
22 TestAddons/Setup 118.52
24 TestAddons/parallel/Registry 14.55
26 TestAddons/parallel/InspektorGadget 10.73
27 TestAddons/parallel/MetricsServer 5.82
28 TestAddons/parallel/HelmTiller 10.31
30 TestAddons/parallel/CSI 79.73
31 TestAddons/parallel/Headlamp 11.68
32 TestAddons/parallel/CloudSpanner 5.54
35 TestAddons/serial/GCPAuth/Namespaces 0.11
36 TestAddons/StoppedEnableDisable 12.07
37 TestCertOptions 29.76
38 TestCertExpiration 236.59
40 TestForceSystemdFlag 24.79
41 TestForceSystemdEnv 42.3
43 TestKVMDriverInstallOrUpdate 1.92
47 TestErrorSpam/setup 21.27
48 TestErrorSpam/start 0.57
49 TestErrorSpam/status 0.84
50 TestErrorSpam/pause 1.45
51 TestErrorSpam/unpause 1.43
52 TestErrorSpam/stop 1.34
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 36.33
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 42.99
59 TestFunctional/serial/KubeContext 0.04
60 TestFunctional/serial/KubectlGetPods 0.07
63 TestFunctional/serial/CacheCmd/cache/add_remote 2.69
64 TestFunctional/serial/CacheCmd/cache/add_local 0.78
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.04
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.25
68 TestFunctional/serial/CacheCmd/cache/cache_reload 1.58
69 TestFunctional/serial/CacheCmd/cache/delete 0.09
70 TestFunctional/serial/MinikubeKubectlCmd 0.1
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
72 TestFunctional/serial/ExtraConfig 33.49
73 TestFunctional/serial/ComponentHealth 0.07
74 TestFunctional/serial/LogsCmd 1.32
75 TestFunctional/serial/LogsFileCmd 1.32
76 TestFunctional/serial/InvalidService 3.93
78 TestFunctional/parallel/ConfigCmd 0.3
79 TestFunctional/parallel/DashboardCmd 8.01
80 TestFunctional/parallel/DryRun 0.38
81 TestFunctional/parallel/InternationalLanguage 0.16
82 TestFunctional/parallel/StatusCmd 1.03
86 TestFunctional/parallel/ServiceCmdConnect 10.51
87 TestFunctional/parallel/AddonsCmd 0.12
88 TestFunctional/parallel/PersistentVolumeClaim 28.71
90 TestFunctional/parallel/SSHCmd 0.59
91 TestFunctional/parallel/CpCmd 1.1
92 TestFunctional/parallel/MySQL 21.73
93 TestFunctional/parallel/FileSync 0.28
94 TestFunctional/parallel/CertSync 1.59
98 TestFunctional/parallel/NodeLabels 0.08
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.55
102 TestFunctional/parallel/License 0.14
103 TestFunctional/parallel/ServiceCmd/DeployApp 9.19
105 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.54
106 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
108 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.32
109 TestFunctional/parallel/ServiceCmd/List 0.48
110 TestFunctional/parallel/ServiceCmd/JSONOutput 0.46
111 TestFunctional/parallel/ServiceCmd/HTTPS 0.32
112 TestFunctional/parallel/ServiceCmd/Format 0.34
113 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
114 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
118 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
119 TestFunctional/parallel/ServiceCmd/URL 0.35
120 TestFunctional/parallel/Version/short 0.05
121 TestFunctional/parallel/Version/components 1.07
122 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
123 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
124 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
125 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
126 TestFunctional/parallel/ImageCommands/ImageBuild 1.87
127 TestFunctional/parallel/ImageCommands/Setup 1.03
128 TestFunctional/parallel/ProfileCmd/profile_not_create 0.39
129 TestFunctional/parallel/ProfileCmd/profile_list 0.36
130 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.64
131 TestFunctional/parallel/ProfileCmd/profile_json_output 0.45
132 TestFunctional/parallel/MountCmd/any-port 6.76
133 TestFunctional/parallel/UpdateContextCmd/no_changes 0.13
134 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
135 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.58
138 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 10.38
139 TestFunctional/parallel/MountCmd/VerifyCleanup 1.99
140 TestFunctional/parallel/ImageCommands/ImageSaveToFile 4.2
141 TestFunctional/parallel/ImageCommands/ImageRemove 0.44
142 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.18
143 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.15
144 TestFunctional/delete_addon-resizer_images 0.07
145 TestFunctional/delete_my-image_image 0.02
146 TestFunctional/delete_minikube_cached_images 0.02
150 TestIngressAddonLegacy/StartLegacyK8sCluster 66.84
152 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 10.27
153 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.52
157 TestJSONOutput/start/Command 66.42
158 TestJSONOutput/start/Audit 0
160 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
161 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
163 TestJSONOutput/pause/Command 0.65
164 TestJSONOutput/pause/Audit 0
166 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
169 TestJSONOutput/unpause/Command 0.57
170 TestJSONOutput/unpause/Audit 0
172 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/stop/Command 5.7
176 TestJSONOutput/stop/Audit 0
178 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
180 TestErrorJSONOutput 0.19
182 TestKicCustomNetwork/create_custom_network 29.89
183 TestKicCustomNetwork/use_default_bridge_network 24.61
184 TestKicExistingNetwork 26.71
185 TestKicCustomSubnet 26.62
186 TestKicStaticIP 24.7
187 TestMainNoArgs 0.04
188 TestMinikubeProfile 52.3
191 TestMountStart/serial/StartWithMountFirst 7.91
192 TestMountStart/serial/VerifyMountFirst 0.23
193 TestMountStart/serial/StartWithMountSecond 7.81
194 TestMountStart/serial/VerifyMountSecond 0.24
195 TestMountStart/serial/DeleteFirst 1.6
196 TestMountStart/serial/VerifyMountPostDelete 0.24
197 TestMountStart/serial/Stop 1.18
198 TestMountStart/serial/RestartStopped 7.11
199 TestMountStart/serial/VerifyMountPostStop 0.23
202 TestMultiNode/serial/FreshStart2Nodes 85.92
203 TestMultiNode/serial/DeployApp2Nodes 3.24
205 TestMultiNode/serial/AddNode 15.91
206 TestMultiNode/serial/ProfileList 0.26
207 TestMultiNode/serial/CopyFile 8.6
208 TestMultiNode/serial/StopNode 2.06
209 TestMultiNode/serial/StartAfterStop 10.73
210 TestMultiNode/serial/RestartKeepsNodes 113.02
211 TestMultiNode/serial/DeleteNode 4.63
212 TestMultiNode/serial/StopMultiNode 23.86
213 TestMultiNode/serial/RestartMultiNode 78.35
214 TestMultiNode/serial/ValidateNameConflict 23.39
219 TestPreload 125.5
221 TestScheduledStopUnix 97.58
224 TestInsufficientStorage 9.89
227 TestKubernetesUpgrade 361.01
228 TestMissingContainerUpgrade 174.49
230 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
231 TestNoKubernetes/serial/StartWithK8s 35.7
232 TestNoKubernetes/serial/StartWithStopK8s 7.6
233 TestNoKubernetes/serial/Start 10.09
234 TestStoppedBinaryUpgrade/Setup 0.46
236 TestNoKubernetes/serial/VerifyK8sNotRunning 0.32
237 TestNoKubernetes/serial/ProfileList 1.45
238 TestNoKubernetes/serial/Stop 1.28
239 TestNoKubernetes/serial/StartNoArgs 8.38
240 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.38
241 TestStoppedBinaryUpgrade/MinikubeLogs 0.54
250 TestPause/serial/Start 43.46
251 TestPause/serial/SecondStartNoReconfiguration 27.4
259 TestNetworkPlugins/group/false 2.9
263 TestPause/serial/Pause 0.7
264 TestPause/serial/VerifyStatus 0.29
265 TestPause/serial/Unpause 0.68
266 TestPause/serial/PauseAgain 0.8
267 TestPause/serial/DeletePaused 3.89
268 TestPause/serial/VerifyDeletedResources 0.56
270 TestStartStop/group/old-k8s-version/serial/FirstStart 124.7
272 TestStartStop/group/no-preload/serial/FirstStart 51.59
273 TestStartStop/group/no-preload/serial/DeployApp 8.41
274 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.95
275 TestStartStop/group/no-preload/serial/Stop 11.9
276 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.15
277 TestStartStop/group/no-preload/serial/SecondStart 339.96
278 TestStartStop/group/old-k8s-version/serial/DeployApp 8.41
279 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.77
280 TestStartStop/group/old-k8s-version/serial/Stop 11.93
281 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
282 TestStartStop/group/old-k8s-version/serial/SecondStart 422.17
284 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 70.57
286 TestStartStop/group/newest-cni/serial/FirstStart 35.7
287 TestStartStop/group/newest-cni/serial/DeployApp 0
288 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.98
289 TestStartStop/group/newest-cni/serial/Stop 5.7
290 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.16
291 TestStartStop/group/newest-cni/serial/SecondStart 26.33
292 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 7.47
293 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.02
294 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.94
295 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.16
296 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 336.23
297 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
298 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
299 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.3
300 TestStartStop/group/newest-cni/serial/Pause 2.49
302 TestStartStop/group/embed-certs/serial/FirstStart 69.71
303 TestStartStop/group/embed-certs/serial/DeployApp 8.46
304 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.9
305 TestStartStop/group/embed-certs/serial/Stop 11.87
306 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.16
307 TestStartStop/group/embed-certs/serial/SecondStart 341.71
308 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 12.02
309 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
310 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
311 TestStartStop/group/no-preload/serial/Pause 2.63
312 TestNetworkPlugins/group/auto/Start 67.29
313 TestNetworkPlugins/group/auto/KubeletFlags 0.26
314 TestNetworkPlugins/group/auto/NetCatPod 10.29
315 TestNetworkPlugins/group/auto/DNS 0.16
316 TestNetworkPlugins/group/auto/Localhost 0.14
317 TestNetworkPlugins/group/auto/HairPin 0.14
318 TestNetworkPlugins/group/kindnet/Start 70.67
319 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
320 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
321 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.28
322 TestStartStop/group/old-k8s-version/serial/Pause 2.83
323 TestNetworkPlugins/group/calico/Start 63.83
324 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 10.12
325 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
326 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.3
327 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.78
328 TestNetworkPlugins/group/custom-flannel/Start 57.98
329 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
330 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
331 TestNetworkPlugins/group/kindnet/NetCatPod 10.3
332 TestNetworkPlugins/group/kindnet/DNS 0.19
333 TestNetworkPlugins/group/kindnet/Localhost 0.15
334 TestNetworkPlugins/group/kindnet/HairPin 0.15
335 TestNetworkPlugins/group/calico/ControllerPod 5.03
336 TestNetworkPlugins/group/calico/KubeletFlags 0.27
337 TestNetworkPlugins/group/calico/NetCatPod 11.37
338 TestNetworkPlugins/group/enable-default-cni/Start 77.21
339 TestNetworkPlugins/group/calico/DNS 0.18
340 TestNetworkPlugins/group/calico/Localhost 0.15
341 TestNetworkPlugins/group/calico/HairPin 0.14
342 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
343 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.38
344 TestNetworkPlugins/group/custom-flannel/DNS 0.17
345 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
346 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
347 TestNetworkPlugins/group/flannel/Start 60.06
348 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 8.02
349 TestNetworkPlugins/group/bridge/Start 36.58
350 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
351 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.37
352 TestStartStop/group/embed-certs/serial/Pause 2.95
353 TestNetworkPlugins/group/bridge/KubeletFlags 0.25
354 TestNetworkPlugins/group/bridge/NetCatPod 10.28
355 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
356 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.32
357 TestNetworkPlugins/group/flannel/ControllerPod 5.02
358 TestNetworkPlugins/group/bridge/DNS 0.16
359 TestNetworkPlugins/group/bridge/Localhost 0.13
360 TestNetworkPlugins/group/bridge/HairPin 0.14
361 TestNetworkPlugins/group/flannel/KubeletFlags 0.26
362 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
363 TestNetworkPlugins/group/flannel/NetCatPod 9.34
364 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
365 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
366 TestNetworkPlugins/group/flannel/DNS 0.19
367 TestNetworkPlugins/group/flannel/Localhost 0.15
368 TestNetworkPlugins/group/flannel/HairPin 0.17
TestDownloadOnly/v1.16.0/json-events (8.18s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-763731 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-763731 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (8.179373385s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (8.18s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-763731
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-763731: exit status 85 (55.369848ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-763731 | jenkins | v1.31.1 | 31 Jul 23 10:54 UTC |          |
	|         | -p download-only-763731        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/31 10:54:46
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.20.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 10:54:46.196359   15658 out.go:296] Setting OutFile to fd 1 ...
	I0731 10:54:46.196486   15658 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 10:54:46.196496   15658 out.go:309] Setting ErrFile to fd 2...
	I0731 10:54:46.196504   15658 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 10:54:46.196703   15658 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16968-8855/.minikube/bin
	W0731 10:54:46.196821   15658 root.go:314] Error reading config file at /home/jenkins/minikube-integration/16968-8855/.minikube/config/config.json: open /home/jenkins/minikube-integration/16968-8855/.minikube/config/config.json: no such file or directory
	I0731 10:54:46.197369   15658 out.go:303] Setting JSON to true
	I0731 10:54:46.198145   15658 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2237,"bootTime":1690798649,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1038-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 10:54:46.198198   15658 start.go:138] virtualization: kvm guest
	I0731 10:54:46.200776   15658 out.go:97] [download-only-763731] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 10:54:46.202410   15658 out.go:169] MINIKUBE_LOCATION=16968
	W0731 10:54:46.200910   15658 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/16968-8855/.minikube/cache/preloaded-tarball: no such file or directory
	I0731 10:54:46.200950   15658 notify.go:220] Checking for updates...
	I0731 10:54:46.205330   15658 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 10:54:46.206808   15658 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16968-8855/kubeconfig
	I0731 10:54:46.208348   15658 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16968-8855/.minikube
	I0731 10:54:46.209713   15658 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0731 10:54:46.212496   15658 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 10:54:46.212792   15658 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 10:54:46.233480   15658 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0731 10:54:46.233542   15658 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 10:54:46.579662   15658 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:42 SystemTime:2023-07-31 10:54:46.571659389 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1038-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0731 10:54:46.579771   15658 docker.go:294] overlay module found
	I0731 10:54:46.581697   15658 out.go:97] Using the docker driver based on user configuration
	I0731 10:54:46.581772   15658 start.go:298] selected driver: docker
	I0731 10:54:46.581787   15658 start.go:898] validating driver "docker" against <nil>
	I0731 10:54:46.581882   15658 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 10:54:46.636312   15658 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:42 SystemTime:2023-07-31 10:54:46.628536957 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1038-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0731 10:54:46.636477   15658 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0731 10:54:46.636897   15658 start_flags.go:382] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0731 10:54:46.637027   15658 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 10:54:46.638982   15658 out.go:169] Using Docker driver with root privileges
	I0731 10:54:46.640461   15658 cni.go:84] Creating CNI manager for ""
	I0731 10:54:46.640480   15658 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0731 10:54:46.640492   15658 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 10:54:46.640501   15658 start_flags.go:319] config:
	{Name:download-only-763731 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-763731 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 10:54:46.641932   15658 out.go:97] Starting control plane node download-only-763731 in cluster download-only-763731
	I0731 10:54:46.641954   15658 cache.go:122] Beginning downloading kic base image for docker with crio
	I0731 10:54:46.643310   15658 out.go:97] Pulling base image ...
	I0731 10:54:46.643342   15658 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0731 10:54:46.643368   15658 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0731 10:54:46.658570   15658 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0731 10:54:46.658737   15658 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory
	I0731 10:54:46.658817   15658 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0731 10:54:46.666106   15658 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0731 10:54:46.666126   15658 cache.go:57] Caching tarball of preloaded images
	I0731 10:54:46.666257   15658 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0731 10:54:46.668137   15658 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0731 10:54:46.668157   15658 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0731 10:54:46.708355   15658 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/16968-8855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0731 10:54:49.617584   15658 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 as a tarball
	I0731 10:54:50.624451   15658 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0731 10:54:50.624535   15658 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16968-8855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-763731"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.06s)
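
The download above fetches the preload tarball with an md5 digest embedded in the URL query ("?checksum=md5:..."), and preload.go then logs "getting checksum" and "verifying checksum" steps. As a minimal sketch of what that verification amounts to (path and expected digest are copied from the log; this is an illustration, not minikube's actual implementation):

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"log"
		"os"
	)

	func main() {
		// Path and digest as logged by download.go above.
		path := "/home/jenkins/minikube-integration/16968-8855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4"
		want := "432b600409d778ea7a21214e83948570"

		f, err := os.Open(path)
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		// Stream the tarball through the hash rather than reading it into memory.
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			log.Fatal(err)
		}
		got := hex.EncodeToString(h.Sum(nil))
		if got != want {
			log.Fatalf("checksum mismatch: got %s, want %s", got, want)
		}
		fmt.Println("preload checksum OK")
	}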

TestDownloadOnly/v1.27.3/json-events (53.47s)

=== RUN   TestDownloadOnly/v1.27.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-763731 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-763731 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=crio --driver=docker  --container-runtime=crio: (53.46797546s)
--- PASS: TestDownloadOnly/v1.27.3/json-events (53.47s)

TestDownloadOnly/v1.27.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.27.3/preload-exists
--- PASS: TestDownloadOnly/v1.27.3/preload-exists (0.00s)

TestDownloadOnly/v1.27.3/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.27.3/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-763731
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-763731: exit status 85 (57.415846ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-763731 | jenkins | v1.31.1 | 31 Jul 23 10:54 UTC |          |
	|         | -p download-only-763731        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-763731 | jenkins | v1.31.1 | 31 Jul 23 10:54 UTC |          |
	|         | -p download-only-763731        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.3   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/31 10:54:54
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.20.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 10:54:54.433783   15815 out.go:296] Setting OutFile to fd 1 ...
	I0731 10:54:54.433892   15815 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 10:54:54.433904   15815 out.go:309] Setting ErrFile to fd 2...
	I0731 10:54:54.433911   15815 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 10:54:54.434111   15815 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16968-8855/.minikube/bin
	W0731 10:54:54.434238   15815 root.go:314] Error reading config file at /home/jenkins/minikube-integration/16968-8855/.minikube/config/config.json: open /home/jenkins/minikube-integration/16968-8855/.minikube/config/config.json: no such file or directory
	I0731 10:54:54.434630   15815 out.go:303] Setting JSON to true
	I0731 10:54:54.435375   15815 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2246,"bootTime":1690798649,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1038-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 10:54:54.435430   15815 start.go:138] virtualization: kvm guest
	I0731 10:54:54.437445   15815 out.go:97] [download-only-763731] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 10:54:54.439595   15815 out.go:169] MINIKUBE_LOCATION=16968
	I0731 10:54:54.437811   15815 notify.go:220] Checking for updates...
	I0731 10:54:54.442810   15815 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 10:54:54.444239   15815 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16968-8855/kubeconfig
	I0731 10:54:54.445462   15815 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16968-8855/.minikube
	I0731 10:54:54.446638   15815 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0731 10:54:54.449084   15815 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 10:54:54.449496   15815 config.go:182] Loaded profile config "download-only-763731": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W0731 10:54:54.449549   15815 start.go:806] api.Load failed for download-only-763731: filestore "download-only-763731": Docker machine "download-only-763731" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0731 10:54:54.449654   15815 driver.go:373] Setting default libvirt URI to qemu:///system
	W0731 10:54:54.449690   15815 start.go:806] api.Load failed for download-only-763731: filestore "download-only-763731": Docker machine "download-only-763731" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0731 10:54:54.471078   15815 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0731 10:54:54.471138   15815 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 10:54:54.523547   15815 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:42 SystemTime:2023-07-31 10:54:54.514970952 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1038-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0731 10:54:54.523636   15815 docker.go:294] overlay module found
	I0731 10:54:54.525030   15815 out.go:97] Using the docker driver based on existing profile
	I0731 10:54:54.525053   15815 start.go:298] selected driver: docker
	I0731 10:54:54.525057   15815 start.go:898] validating driver "docker" against &{Name:download-only-763731 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-763731 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 10:54:54.525188   15815 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 10:54:54.575500   15815 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:42 SystemTime:2023-07-31 10:54:54.567703309 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1038-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0731 10:54:54.576124   15815 cni.go:84] Creating CNI manager for ""
	I0731 10:54:54.576139   15815 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0731 10:54:54.576148   15815 start_flags.go:319] config:
	{Name:download-only-763731 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:download-only-763731 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 10:54:54.578002   15815 out.go:97] Starting control plane node download-only-763731 in cluster download-only-763731
	I0731 10:54:54.578035   15815 cache.go:122] Beginning downloading kic base image for docker with crio
	I0731 10:54:54.579638   15815 out.go:97] Pulling base image ...
	I0731 10:54:54.579667   15815 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0731 10:54:54.579712   15815 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0731 10:54:54.594604   15815 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0731 10:54:54.594783   15815 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory
	I0731 10:54:54.594805   15815 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory, skipping pull
	I0731 10:54:54.594814   15815 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in cache, skipping pull
	I0731 10:54:54.594824   15815 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 as a tarball
	I0731 10:54:54.601868   15815 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.3/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0731 10:54:54.601897   15815 cache.go:57] Caching tarball of preloaded images
	I0731 10:54:54.602042   15815 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0731 10:54:54.603868   15815 out.go:97] Downloading Kubernetes v1.27.3 preload ...
	I0731 10:54:54.603899   15815 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 ...
	I0731 10:54:54.633226   15815 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.3/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:36a3ccedce25b36b9ffc5201ce124dec -> /home/jenkins/minikube-integration/16968-8855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-763731"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.3/LogsDuration (0.06s)

TestDownloadOnly/DeleteAll (0.19s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.19s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-763731
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnlyKic (1.16s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-811246 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-811246" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-811246
--- PASS: TestDownloadOnlyKic (1.16s)

TestBinaryMirror (0.69s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-049634 --alsologtostderr --binary-mirror http://127.0.0.1:46319 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-049634" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-049634
--- PASS: TestBinaryMirror (0.69s)
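
TestBinaryMirror points --binary-mirror at a local HTTP endpoint that serves the Kubernetes binaries in place of the upstream download host. A minimal sketch of the kind of static file server that flag can target, assuming a ./mirror directory arranged in the upstream path layout (the directory name and example path are illustrative; the port matches the test invocation above):

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve ./mirror at the address --binary-mirror points to.
		// The directory must mirror the upstream layout, e.g.
		// ./mirror/v1.27.3/bin/linux/amd64/kubectl (hypothetical path).
		fs := http.FileServer(http.Dir("./mirror"))
		log.Println("binary mirror listening on 127.0.0.1:46319")
		log.Fatal(http.ListenAndServe("127.0.0.1:46319", fs))
	}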

TestOffline (52.99s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-971727 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-971727 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (50.55471365s)
helpers_test.go:175: Cleaning up "offline-crio-971727" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-971727
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-971727: (2.430868665s)
--- PASS: TestOffline (52.99s)

TestAddons/Setup (118.52s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-650980 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-650980 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (1m58.524742465s)
--- PASS: TestAddons/Setup (118.52s)

TestAddons/parallel/Registry (14.55s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 12.086537ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-x5bdf" [6962bb94-2f59-41ec-bca8-678f12148c60] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.011364574s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-b5pm4" [078f5480-bed3-4b63-b9f7-1c4c8b23ba61] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.013288273s
addons_test.go:316: (dbg) Run:  kubectl --context addons-650980 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-650980 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-650980 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.777598226s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-amd64 -p addons-650980 ip
2023/07/31 10:58:02 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p addons-650980 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.55s)
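
The registry check above amounts to two probes: an in-cluster wget --spider against registry.kube-system.svc.cluster.local, and a plain GET against the node IP on port 5000 (the [DEBUG] line). A sketch of the external probe; note the test GETs the root path, while the /v2/ suffix used here is the standard registry ping endpoint and is an assumption on my part:

	package main

	import (
		"fmt"
		"log"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 5 * time.Second}
		// Node IP and port as printed by "minikube ip" and the debug log above.
		resp, err := client.Get("http://192.168.49.2:5000/v2/")
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()
		// A live registry answers /v2/ with 200 (or 401 when auth is enabled).
		fmt.Println("registry responded:", resp.Status)
	}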

TestAddons/parallel/InspektorGadget (10.73s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-4mnw8" [bbf14c53-9df5-4ccf-97d3-a0a0d50ab35a] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.012251658s
addons_test.go:817: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-650980
addons_test.go:817: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-650980: (5.712297542s)
--- PASS: TestAddons/parallel/InspektorGadget (10.73s)

TestAddons/parallel/MetricsServer (5.82s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 14.10207ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-844d8db974-vxkfw" [c5548274-ef0a-43e9-b8c4-8bd2a19fca62] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.011714312s
addons_test.go:391: (dbg) Run:  kubectl --context addons-650980 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p addons-650980 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.82s)

TestAddons/parallel/HelmTiller (10.31s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:432: tiller-deploy stabilized in 14.290439ms
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6847666dc-tdbg5" [b35bc853-fbe0-4fb0-8648-4be6b0ffbac4] Running
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.01150941s
addons_test.go:449: (dbg) Run:  kubectl --context addons-650980 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:449: (dbg) Done: kubectl --context addons-650980 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (3.985954108s)
addons_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p addons-650980 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:466: (dbg) Done: out/minikube-linux-amd64 -p addons-650980 addons disable helm-tiller --alsologtostderr -v=1: (1.300242509s)
--- PASS: TestAddons/parallel/HelmTiller (10.31s)

TestAddons/parallel/CSI (79.73s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 7.49596ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-650980 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-650980 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [eb546d51-9b7b-40f4-a4f4-bb44c135f627] Pending
helpers_test.go:344: "task-pv-pod" [eb546d51-9b7b-40f4-a4f4-bb44c135f627] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [eb546d51-9b7b-40f4-a4f4-bb44c135f627] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.01114194s
addons_test.go:560: (dbg) Run:  kubectl --context addons-650980 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-650980 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-650980 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-650980 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-650980 delete pod task-pv-pod
addons_test.go:576: (dbg) Run:  kubectl --context addons-650980 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-650980 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-650980 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-650980 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [d6480377-e89b-4526-ad31-cc2bf0bf0075] Pending
helpers_test.go:344: "task-pv-pod-restore" [d6480377-e89b-4526-ad31-cc2bf0bf0075] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 6.008647898s
addons_test.go:602: (dbg) Run:  kubectl --context addons-650980 delete pod task-pv-pod-restore
addons_test.go:606: (dbg) Run:  kubectl --context addons-650980 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-650980 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-amd64 -p addons-650980 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-amd64 -p addons-650980 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.565131484s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-amd64 -p addons-650980 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (79.73s)
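
The long run of helpers_test.go:394 lines above is a poll loop: the helper re-runs kubectl get pvc ... -o jsonpath={.status.phase} until the claim reports Bound. A standalone sketch of the same loop, shelling out to kubectl (the context, claim name, and 6m0s budget come from the log; the 2-second interval is an assumption):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		deadline := time.Now().Add(6 * time.Minute) // the test waits 6m0s
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", "addons-650980",
				"get", "pvc", "hpvc", "-n", "default",
				"-o", "jsonpath={.status.phase}").Output()
			if err == nil && strings.TrimSpace(string(out)) == "Bound" {
				fmt.Println("pvc hpvc is Bound")
				return
			}
			time.Sleep(2 * time.Second) // polling interval (assumed)
		}
		log.Fatal("timed out waiting for pvc hpvc to become Bound")
	}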

TestAddons/parallel/Headlamp (11.68s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-650980 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-650980 --alsologtostderr -v=1: (1.610956736s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-66f6498c69-jqbhg" [02018234-e290-4922-a3c1-b326f7f4af62] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-66f6498c69-jqbhg" [02018234-e290-4922-a3c1-b326f7f4af62] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.066714121s
--- PASS: TestAddons/parallel/Headlamp (11.68s)

TestAddons/parallel/CloudSpanner (5.54s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-88647b4cb-mnqm2" [e2304f54-1ea7-41eb-a91c-50c66e0426a6] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.008446627s
addons_test.go:836: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-650980
--- PASS: TestAddons/parallel/CloudSpanner (5.54s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-650980 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-650980 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/StoppedEnableDisable (12.07s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-650980
addons_test.go:148: (dbg) Done: out/minikube-linux-amd64 stop -p addons-650980: (11.848468396s)
addons_test.go:152: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-650980
addons_test.go:156: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-650980
addons_test.go:161: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-650980
--- PASS: TestAddons/StoppedEnableDisable (12.07s)

TestCertOptions (29.76s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-937414 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-937414 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (25.356399389s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-937414 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-937414 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-937414 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-937414" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-937414
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-937414: (3.788667844s)
--- PASS: TestCertOptions (29.76s)
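The pass condition is that the extra IPs, names, and port all surface in the generated apiserver certificate and kubeconfig. A minimal sketch of the same inspection, assuming a scratch profile named demo (hypothetical):

	minikube start -p demo --apiserver-ips=192.168.15.15 --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --container-runtime=crio
	# the extra IP and name should appear under Subject Alternative Name
	minikube -p demo ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
	# and the non-default port should appear in the kubeconfig server URL
	kubectl --context demo config view --minify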

TestCertExpiration (236.59s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-735531 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-735531 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (26.696171876s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-735531 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-735531 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (25.9850818s)
helpers_test.go:175: Cleaning up "cert-expiration-735531" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-735531
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-735531: (3.909218027s)
--- PASS: TestCertExpiration (236.59s)
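Most of the 236s is spent waiting out the 3m certificate lifetime so that the second start has expired certs to regenerate. A sketch of the same flow, assuming profile demo (hypothetical):

	minikube start -p demo --cert-expiration=3m --driver=docker --container-runtime=crio
	# report the apiserver cert expiry (should be roughly three minutes out)
	minikube -p demo ssh "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"
	# after the certs lapse, restarting with a longer lifetime regenerates them
	minikube start -p demo --cert-expiration=8760h --driver=docker --container-runtime=crio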

TestForceSystemdFlag (24.79s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-101131 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-101131 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (22.222590001s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-101131 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-101131" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-101131
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-101131: (2.313913863s)
--- PASS: TestForceSystemdFlag (24.79s)
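The check at docker_test.go:132 reads CRI-O's generated drop-in to confirm which cgroup manager was selected. By hand, assuming profile demo (hypothetical):

	minikube start -p demo --force-systemd --driver=docker --container-runtime=crio
	# with --force-systemd the drop-in should select the systemd cgroup
	# manager rather than cgroupfs
	minikube -p demo ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager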

TestForceSystemdEnv (42.3s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-018052 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-018052 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (39.635093868s)
helpers_test.go:175: Cleaning up "force-systemd-env-018052" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-018052
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-018052: (2.659774065s)
--- PASS: TestForceSystemdEnv (42.30s)

TestKVMDriverInstallOrUpdate (1.92s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.92s)

TestErrorSpam/setup (21.27s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-743676 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-743676 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-743676 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-743676 --driver=docker  --container-runtime=crio: (21.265262337s)
--- PASS: TestErrorSpam/setup (21.27s)

TestErrorSpam/start (0.57s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-743676 --log_dir /tmp/nospam-743676 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-743676 --log_dir /tmp/nospam-743676 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-743676 --log_dir /tmp/nospam-743676 start --dry-run
--- PASS: TestErrorSpam/start (0.57s)

TestErrorSpam/status (0.84s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-743676 --log_dir /tmp/nospam-743676 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-743676 --log_dir /tmp/nospam-743676 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-743676 --log_dir /tmp/nospam-743676 status
--- PASS: TestErrorSpam/status (0.84s)

TestErrorSpam/pause (1.45s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-743676 --log_dir /tmp/nospam-743676 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-743676 --log_dir /tmp/nospam-743676 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-743676 --log_dir /tmp/nospam-743676 pause
--- PASS: TestErrorSpam/pause (1.45s)

TestErrorSpam/unpause (1.43s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-743676 --log_dir /tmp/nospam-743676 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-743676 --log_dir /tmp/nospam-743676 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-743676 --log_dir /tmp/nospam-743676 unpause
--- PASS: TestErrorSpam/unpause (1.43s)

TestErrorSpam/stop (1.34s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-743676 --log_dir /tmp/nospam-743676 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-743676 --log_dir /tmp/nospam-743676 stop: (1.178494933s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-743676 --log_dir /tmp/nospam-743676 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-743676 --log_dir /tmp/nospam-743676 stop
--- PASS: TestErrorSpam/stop (1.34s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/16968-8855/.minikube/files/etc/test/nested/copy/15646/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (36.33s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-671868 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-671868 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (36.333074924s)
--- PASS: TestFunctional/serial/StartWithProxy (36.33s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (42.99s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-671868 --alsologtostderr -v=8
E0731 11:02:48.722396   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/client.crt: no such file or directory
E0731 11:02:48.728100   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/client.crt: no such file or directory
E0731 11:02:48.738334   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/client.crt: no such file or directory
E0731 11:02:48.758569   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/client.crt: no such file or directory
E0731 11:02:48.798875   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/client.crt: no such file or directory
E0731 11:02:48.879175   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/client.crt: no such file or directory
E0731 11:02:49.040187   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/client.crt: no such file or directory
E0731 11:02:49.360896   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/client.crt: no such file or directory
E0731 11:02:50.001595   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/client.crt: no such file or directory
E0731 11:02:51.281769   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/client.crt: no such file or directory
E0731 11:02:53.842897   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/client.crt: no such file or directory
E0731 11:02:58.963100   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-671868 --alsologtostderr -v=8: (42.984883597s)
functional_test.go:659: soft start took 42.985567981s for "functional-671868" cluster.
--- PASS: TestFunctional/serial/SoftStart (42.99s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-671868 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.69s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.69s)

TestFunctional/serial/CacheCmd/cache/add_local (0.78s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-671868 /tmp/TestFunctionalserialCacheCmdcacheadd_local4175073283/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 cache add minikube-local-cache-test:functional-671868
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 cache delete minikube-local-cache-test:functional-671868
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-671868
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.78s)
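The local variant round-trips a freshly built image through the cluster cache. A sketch, assuming profile demo and a build context ./ctx with a Dockerfile (both hypothetical):

	docker build -t local-cache-test:demo ./ctx
	minikube -p demo cache add local-cache-test:demo
	# drop the cache entry, then the host-side image
	minikube -p demo cache delete local-cache-test:demo
	docker rmi local-cache-test:demo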

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.58s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-671868 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (265.07696ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.58s)
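The Non-zero exit above is the point of the test: the image is removed inside the node, inspecti fails, and cache reload pushes it back from the host-side cache. A by-hand equivalent, assuming profile demo (hypothetical):

	minikube -p demo ssh sudo crictl rmi registry.k8s.io/pause:latest
	# expected to fail now: the image is gone from the node
	minikube -p demo ssh sudo crictl inspecti registry.k8s.io/pause:latest || true
	# re-push the cached images into the node; the same inspect then succeeds
	minikube -p demo cache reload
	minikube -p demo ssh sudo crictl inspecti registry.k8s.io/pause:latest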

TestFunctional/serial/CacheCmd/cache/delete (0.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 kubectl -- --context functional-671868 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-671868 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (33.49s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-671868 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0731 11:03:09.203453   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/client.crt: no such file or directory
E0731 11:03:29.684200   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-671868 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.48741082s)
functional_test.go:757: restart took 33.487527149s for "functional-671868" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (33.49s)
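--extra-config takes a <component>.<key>=<value> form and is applied by restarting the existing profile, which is why the log reports "restart took ...". A sketch, assuming profile demo (hypothetical):

	# one flag per setting; here an apiserver admission plugin
	minikube start -p demo --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all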

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-671868 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.32s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-671868 logs: (1.322592126s)
--- PASS: TestFunctional/serial/LogsCmd (1.32s)

TestFunctional/serial/LogsFileCmd (1.32s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 logs --file /tmp/TestFunctionalserialLogsFileCmd4152990298/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-671868 logs --file /tmp/TestFunctionalserialLogsFileCmd4152990298/001/logs.txt: (1.318391596s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.32s)

TestFunctional/serial/InvalidService (3.93s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-671868 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-671868
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-671868: exit status 115 (314.931837ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31770 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-671868 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.93s)

TestFunctional/parallel/ConfigCmd (0.3s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-671868 config get cpus: exit status 14 (66.690331ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-671868 config get cpus: exit status 14 (40.770654ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.30s)
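config get on an unset key exits 14, which is exactly what the two Non-zero exits above assert; set/get/unset otherwise round-trip. A sketch (profile name demo is hypothetical):

	minikube -p demo config set cpus 2
	minikube -p demo config get cpus     # prints 2
	minikube -p demo config unset cpus
	minikube -p demo config get cpus     # key absent: exit status 14
	echo $?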

TestFunctional/parallel/DashboardCmd (8.01s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-671868 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-671868 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 50239: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.01s)

TestFunctional/parallel/DryRun (0.38s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-671868 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-671868 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (148.136825ms)

-- stdout --
	* [functional-671868] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16968
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16968-8855/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16968-8855/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0731 11:04:00.400047   49554 out.go:296] Setting OutFile to fd 1 ...
	I0731 11:04:00.400185   49554 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 11:04:00.400194   49554 out.go:309] Setting ErrFile to fd 2...
	I0731 11:04:00.400198   49554 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 11:04:00.400430   49554 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16968-8855/.minikube/bin
	I0731 11:04:00.401079   49554 out.go:303] Setting JSON to false
	I0731 11:04:00.402179   49554 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2791,"bootTime":1690798649,"procs":330,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1038-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 11:04:00.402250   49554 start.go:138] virtualization: kvm guest
	I0731 11:04:00.404737   49554 out.go:177] * [functional-671868] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 11:04:00.406965   49554 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 11:04:00.407003   49554 notify.go:220] Checking for updates...
	I0731 11:04:00.408422   49554 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 11:04:00.410037   49554 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16968-8855/kubeconfig
	I0731 11:04:00.411694   49554 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16968-8855/.minikube
	I0731 11:04:00.413385   49554 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 11:04:00.415158   49554 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 11:04:00.417686   49554 config.go:182] Loaded profile config "functional-671868": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0731 11:04:00.418158   49554 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 11:04:00.440264   49554 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0731 11:04:00.440369   49554 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 11:04:00.493005   49554 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:48 SystemTime:2023-07-31 11:04:00.48413469 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1038-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0731 11:04:00.493132   49554 docker.go:294] overlay module found
	I0731 11:04:00.494870   49554 out.go:177] * Using the docker driver based on existing profile
	I0731 11:04:00.496288   49554 start.go:298] selected driver: docker
	I0731 11:04:00.496309   49554 start.go:898] validating driver "docker" against &{Name:functional-671868 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-671868 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 11:04:00.496412   49554 start.go:909] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 11:04:00.498645   49554 out.go:177] 
	W0731 11:04:00.500250   49554 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0731 11:04:00.501866   49554 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-671868 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.38s)
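--dry-run runs the full validation path without creating anything, so the undersized request fails fast with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) while the second, default-sized dry run passes. A sketch, assuming profile demo (hypothetical):

	# 250MB is below the 1800MB usable minimum, so this exits 23
	minikube start -p demo --dry-run --memory 250MB --driver=docker --container-runtime=crio; echo $?
	# with a workable memory size the same dry run validates cleanly
	minikube start -p demo --dry-run --memory 4000 --driver=docker --container-runtime=crio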

TestFunctional/parallel/InternationalLanguage (0.16s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-671868 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-671868 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (158.26745ms)

-- stdout --
	* [functional-671868] minikube v1.31.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16968
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16968-8855/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16968-8855/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0731 11:04:00.776495   49754 out.go:296] Setting OutFile to fd 1 ...
	I0731 11:04:00.776966   49754 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 11:04:00.776985   49754 out.go:309] Setting ErrFile to fd 2...
	I0731 11:04:00.776993   49754 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 11:04:00.777602   49754 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16968-8855/.minikube/bin
	I0731 11:04:00.778730   49754 out.go:303] Setting JSON to false
	I0731 11:04:00.779704   49754 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2792,"bootTime":1690798649,"procs":330,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1038-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 11:04:00.779764   49754 start.go:138] virtualization: kvm guest
	I0731 11:04:00.781949   49754 out.go:177] * [functional-671868] minikube v1.31.1 sur Ubuntu 20.04 (kvm/amd64)
	I0731 11:04:00.784232   49754 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 11:04:00.784269   49754 notify.go:220] Checking for updates...
	I0731 11:04:00.785736   49754 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 11:04:00.787336   49754 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16968-8855/kubeconfig
	I0731 11:04:00.789011   49754 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16968-8855/.minikube
	I0731 11:04:00.790589   49754 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 11:04:00.792055   49754 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 11:04:00.794068   49754 config.go:182] Loaded profile config "functional-671868": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0731 11:04:00.794697   49754 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 11:04:00.819259   49754 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0731 11:04:00.819347   49754 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 11:04:00.877636   49754 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:48 SystemTime:2023-07-31 11:04:00.868526051 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1038-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0731 11:04:00.877774   49754 docker.go:294] overlay module found
	I0731 11:04:00.880003   49754 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0731 11:04:00.881596   49754 start.go:298] selected driver: docker
	I0731 11:04:00.881609   49754 start.go:898] validating driver "docker" against &{Name:functional-671868 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-671868 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0731 11:04:00.881698   49754 start.go:909] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 11:04:00.884056   49754 out.go:177] 
	W0731 11:04:00.885647   49754 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0731 11:04:00.887103   49754 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

TestFunctional/parallel/StatusCmd (1.03s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.03s)
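status accepts a Go template via -f and structured output via -o, which is what the three runs above exercise. A sketch (profile name demo is hypothetical):

	minikube -p demo status
	# template fields come from the status struct: Host, Kubelet, APIServer, Kubeconfig
	minikube -p demo status -f 'host:{{.Host}},apiserver:{{.APIServer}}'
	minikube -p demo status -o json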

TestFunctional/parallel/ServiceCmdConnect (10.51s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-671868 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-671868 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6fb669fc84-plvxq" [bd66a8bb-baf4-4238-86f9-0f6a2b6f4131] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-6fb669fc84-plvxq" [bd66a8bb-baf4-4238-86f9-0f6a2b6f4131] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.009932522s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:32187
functional_test.go:1674: http://192.168.49.2:32187: success! body:

Hostname: hello-node-connect-6fb669fc84-plvxq

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32187
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.51s)
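The connectivity check is: deploy, expose as NodePort, resolve the URL through minikube, then hit it; the echoserver body above is the round-trip proof. A by-hand version, assuming kube-context demo (hypothetical):

	kubectl --context demo create deployment hello-node --image=registry.k8s.io/echoserver:1.8
	kubectl --context demo expose deployment hello-node --type=NodePort --port=8080
	kubectl --context demo wait --for=condition=available deployment/hello-node --timeout=120s
	# prints e.g. http://192.168.49.2:<nodeport>
	URL=$(minikube -p demo service hello-node --url)
	curl -s "$URL"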

TestFunctional/parallel/AddonsCmd (0.12s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/PersistentVolumeClaim (28.71s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [9cd75e12-9274-4666-b383-c5351e24eede] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.022851341s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-671868 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-671868 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-671868 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-671868 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2b4cecaa-9396-4f63-b73e-084d3929b987] Pending
helpers_test.go:344: "sp-pod" [2b4cecaa-9396-4f63-b73e-084d3929b987] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2b4cecaa-9396-4f63-b73e-084d3929b987] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.05585806s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-671868 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-671868 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-671868 delete -f testdata/storage-provisioner/pod.yaml: (1.271351031s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-671868 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [23818853-f384-4732-ad67-10b0040bac3a] Pending
helpers_test.go:344: "sp-pod" [23818853-f384-4732-ad67-10b0040bac3a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [23818853-f384-4732-ad67-10b0040bac3a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.039966833s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-671868 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.71s)

TestFunctional/parallel/SSHCmd (0.59s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.59s)

TestFunctional/parallel/CpCmd (1.1s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 ssh -n functional-671868 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 cp functional-671868:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd75758989/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 ssh -n functional-671868 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.10s)

TestFunctional/parallel/MySQL (21.73s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-671868 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-7db894d786-6tbgt" [3b78633f-4bb1-41ec-ac7a-894f495c7892] Pending
helpers_test.go:344: "mysql-7db894d786-6tbgt" [3b78633f-4bb1-41ec-ac7a-894f495c7892] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
E0731 11:04:10.645249   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/client.crt: no such file or directory
helpers_test.go:344: "mysql-7db894d786-6tbgt" [3b78633f-4bb1-41ec-ac7a-894f495c7892] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 18.01314887s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-671868 exec mysql-7db894d786-6tbgt -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-671868 exec mysql-7db894d786-6tbgt -- mysql -ppassword -e "show databases;": exit status 1 (134.091244ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-671868 exec mysql-7db894d786-6tbgt -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-671868 exec mysql-7db894d786-6tbgt -- mysql -ppassword -e "show databases;": exit status 1 (137.047953ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-671868 exec mysql-7db894d786-6tbgt -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.73s)

TestFunctional/parallel/FileSync (0.28s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/15646/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 ssh "sudo cat /etc/test/nested/copy/15646/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

TestFunctional/parallel/CertSync (1.59s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/15646.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 ssh "sudo cat /etc/ssl/certs/15646.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/15646.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 ssh "sudo cat /usr/share/ca-certificates/15646.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/156462.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 ssh "sudo cat /etc/ssl/certs/156462.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/156462.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 ssh "sudo cat /usr/share/ca-certificates/156462.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.59s)

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-671868 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-671868 ssh "sudo systemctl is-active docker": exit status 1 (268.107022ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-671868 ssh "sudo systemctl is-active containerd": exit status 1 (282.198708ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)

TestFunctional/parallel/License (0.14s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.14s)

TestFunctional/parallel/ServiceCmd/DeployApp (9.19s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-671868 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-671868 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-775766b4cc-6lk6k" [5e45926b-d836-4a51-8118-f7de5dabf768] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-775766b4cc-6lk6k" [5e45926b-d836-4a51-8118-f7de5dabf768] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.011972404s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.19s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-671868 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-671868 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-671868 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-671868 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 46404: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-671868 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.32s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-671868 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [7f3781b2-24cc-4001-8611-bba97faa1b5f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [7f3781b2-24cc-4001-8611-bba97faa1b5f] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.010833548s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.32s)

TestFunctional/parallel/ServiceCmd/List (0.48s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.48s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 service list -o json
functional_test.go:1493: Took "463.119737ms" to run "out/minikube-linux-amd64 -p functional-671868 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:31407
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

TestFunctional/parallel/ServiceCmd/Format (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.34s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-671868 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.101.166.87 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-671868 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/URL (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:31407
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.35s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (1.07s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-amd64 -p functional-671868 version -o=json --components: (1.065378068s)
--- PASS: TestFunctional/parallel/Version/components (1.07s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-671868 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.27.3
registry.k8s.io/kube-proxy:v1.27.3
registry.k8s.io/kube-controller-manager:v1.27.3
registry.k8s.io/kube-apiserver:v1.27.3
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-671868
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230511-dc714da8
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-671868 image ls --format short --alsologtostderr:
I0731 11:04:25.743043   53853 out.go:296] Setting OutFile to fd 1 ...
I0731 11:04:25.743636   53853 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0731 11:04:25.743649   53853 out.go:309] Setting ErrFile to fd 2...
I0731 11:04:25.743655   53853 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0731 11:04:25.744007   53853 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16968-8855/.minikube/bin
I0731 11:04:25.744717   53853 config.go:182] Loaded profile config "functional-671868": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0731 11:04:25.744808   53853 config.go:182] Loaded profile config "functional-671868": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0731 11:04:25.745164   53853 cli_runner.go:164] Run: docker container inspect functional-671868 --format={{.State.Status}}
I0731 11:04:25.765497   53853 ssh_runner.go:195] Run: systemctl --version
I0731 11:04:25.765568   53853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-671868
I0731 11:04:25.781924   53853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/functional-671868/id_rsa Username:docker}
I0731 11:04:25.871985   53853 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-671868 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/mysql                 | 5.7                | d7b085374dbc1 | 601MB  |
| registry.k8s.io/kube-apiserver          | v1.27.3            | 08a0c939e61b7 | 122MB  |
| docker.io/kindest/kindnetd              | v20230511-dc714da8 | b0b1fa0f58c6e | 65.2MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/kube-proxy              | v1.27.3            | 5780543258cf0 | 72.7MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-scheduler          | v1.27.3            | 41697ceeb70b3 | 59.8MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/nginx                 | alpine             | 4937520ae206c | 43.2MB |
| docker.io/library/nginx                 | latest             | 89da1fb6dcb96 | 191MB  |
| gcr.io/google-containers/addon-resizer  | functional-671868  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/etcd                    | 3.5.7-0            | 86b6af7dd652c | 297MB  |
| registry.k8s.io/kube-controller-manager | v1.27.3            | 7cffc01dba0e1 | 114MB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-671868 image ls --format table --alsologtostderr:
I0731 11:04:25.958333   54020 out.go:296] Setting OutFile to fd 1 ...
I0731 11:04:25.958435   54020 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0731 11:04:25.958445   54020 out.go:309] Setting ErrFile to fd 2...
I0731 11:04:25.958449   54020 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0731 11:04:25.958651   54020 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16968-8855/.minikube/bin
I0731 11:04:25.959311   54020 config.go:182] Loaded profile config "functional-671868": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0731 11:04:25.959456   54020 config.go:182] Loaded profile config "functional-671868": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0731 11:04:25.960048   54020 cli_runner.go:164] Run: docker container inspect functional-671868 --format={{.State.Status}}
I0731 11:04:25.980808   54020 ssh_runner.go:195] Run: systemctl --version
I0731 11:04:25.980859   54020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-671868
I0731 11:04:26.001092   54020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/functional-671868/id_rsa Username:docker}
I0731 11:04:26.095916   54020 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-671868 image ls --format json --alsologtostderr:
[
  {"id":"4937520ae206c8969734d9a659fc1e6594d9b22b9340bf0796defbea0c92dd02","repoDigests":["docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6","docker.io/library/nginx@sha256:2d4efe74ef541248b0a70838c557de04509d1115dec6bfc21ad0d66e41574a8a"],"repoTags":["docker.io/library/nginx:alpine"],"size":"43220780"},
  {"id":"89da1fb6dcb964dd35c3f41b7b93ffc35eaf20bc61f2e1335fea710a18424287","repoDigests":["docker.io/library/nginx@sha256:67f9a4f10d147a6e04629340e6493c9703300ca23a2f7f3aa56fe615d75d31ca","docker.io/library/nginx@sha256:73e957703f1266530db0aeac1fd6a3f87c1e59943f4c13eb340bb8521c6041d7"],"repoTags":["docker.io/library/nginx:latest"],"size":"191049983"},
  {"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-671868"],"size":"34114467"},
  {"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},
  {"id":"08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb","registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0"],"repoTags":["registry.k8s.io/kube-apiserver:v1.27.3"],"size":"122065872"},
  {"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},
  {"id":"d7b085374dbc1ca6ee83a18b488b9da0425749c87051e8bd8287dc2a2c775ecb","repoDigests":["docker.io/library/mysql@sha256:2eabad08824e3120dbec9096c276e3956e1922636c06fbb383ae9ea9c499bf43","docker.io/library/mysql@sha256:8e044d43c8d38550dc1c935a0797f76adfa55024dd075f30161602395f99f0ca"],"repoTags":["docker.io/library/mysql:5.7"],"size":"601272484"},
  {"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},
  {"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},
  {"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},
  {"id":"86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681","repoDigests":["registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83","registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9"],"repoTags":["registry.k8s.io/etcd:3.5.7-0"],"size":"297083935"},
  {"id":"5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c","repoDigests":["registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f","registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699"],"repoTags":["registry.k8s.io/kube-proxy:v1.27.3"],"size":"72713623"},
  {"id":"b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da","repoDigests":["docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974","docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"],"repoTags":["docker.io/kindest/kindnetd:v20230511-dc714da8"],"size":"65249302"},
  {"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},
  {"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},
  {"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},
  {"id":"41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082","registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.27.3"],"size":"59811126"},
  {"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},
  {"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},
  {"id":"7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e","registry.k8s.io/kube-controller-manager@sha256:d3bdc20876edfaa4894cf8464dc98592385a43cbc033b37846dccc2460c7bc06"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.27.3"],"size":"113919286"}
]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-671868 image ls --format json --alsologtostderr:
I0731 11:04:25.961603   54019 out.go:296] Setting OutFile to fd 1 ...
I0731 11:04:25.961697   54019 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0731 11:04:25.961701   54019 out.go:309] Setting ErrFile to fd 2...
I0731 11:04:25.961705   54019 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0731 11:04:25.961945   54019 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16968-8855/.minikube/bin
I0731 11:04:25.962503   54019 config.go:182] Loaded profile config "functional-671868": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0731 11:04:25.962601   54019 config.go:182] Loaded profile config "functional-671868": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0731 11:04:25.962995   54019 cli_runner.go:164] Run: docker container inspect functional-671868 --format={{.State.Status}}
I0731 11:04:25.981958   54019 ssh_runner.go:195] Run: systemctl --version
I0731 11:04:25.982008   54019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-671868
I0731 11:04:26.002436   54019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/functional-671868/id_rsa Username:docker}
I0731 11:04:26.096350   54019 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-671868 image ls --format yaml --alsologtostderr:
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: d7b085374dbc1ca6ee83a18b488b9da0425749c87051e8bd8287dc2a2c775ecb
repoDigests:
- docker.io/library/mysql@sha256:2eabad08824e3120dbec9096c276e3956e1922636c06fbb383ae9ea9c499bf43
- docker.io/library/mysql@sha256:8e044d43c8d38550dc1c935a0797f76adfa55024dd075f30161602395f99f0ca
repoTags:
- docker.io/library/mysql:5.7
size: "601272484"
- id: 4937520ae206c8969734d9a659fc1e6594d9b22b9340bf0796defbea0c92dd02
repoDigests:
- docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6
- docker.io/library/nginx@sha256:2d4efe74ef541248b0a70838c557de04509d1115dec6bfc21ad0d66e41574a8a
repoTags:
- docker.io/library/nginx:alpine
size: "43220780"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e
- registry.k8s.io/kube-controller-manager@sha256:d3bdc20876edfaa4894cf8464dc98592385a43cbc033b37846dccc2460c7bc06
repoTags:
- registry.k8s.io/kube-controller-manager:v1.27.3
size: "113919286"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb
- registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0
repoTags:
- registry.k8s.io/kube-apiserver:v1.27.3
size: "122065872"
- id: 5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c
repoDigests:
- registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f
- registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699
repoTags:
- registry.k8s.io/kube-proxy:v1.27.3
size: "72713623"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da
repoDigests:
- docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974
- docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9
repoTags:
- docker.io/kindest/kindnetd:v20230511-dc714da8
size: "65249302"
- id: 89da1fb6dcb964dd35c3f41b7b93ffc35eaf20bc61f2e1335fea710a18424287
repoDigests:
- docker.io/library/nginx@sha256:67f9a4f10d147a6e04629340e6493c9703300ca23a2f7f3aa56fe615d75d31ca
- docker.io/library/nginx@sha256:73e957703f1266530db0aeac1fd6a3f87c1e59943f4c13eb340bb8521c6041d7
repoTags:
- docker.io/library/nginx:latest
size: "191049983"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681
repoDigests:
- registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83
- registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9
repoTags:
- registry.k8s.io/etcd:3.5.7-0
size: "297083935"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-671868
size: "34114467"
- id: 41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082
- registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8
repoTags:
- registry.k8s.io/kube-scheduler:v1.27.3
size: "59811126"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-671868 image ls --format yaml --alsologtostderr:
I0731 11:04:25.741425   53855 out.go:296] Setting OutFile to fd 1 ...
I0731 11:04:25.741582   53855 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0731 11:04:25.741593   53855 out.go:309] Setting ErrFile to fd 2...
I0731 11:04:25.741600   53855 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0731 11:04:25.741889   53855 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16968-8855/.minikube/bin
I0731 11:04:25.742755   53855 config.go:182] Loaded profile config "functional-671868": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0731 11:04:25.742956   53855 config.go:182] Loaded profile config "functional-671868": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0731 11:04:25.743491   53855 cli_runner.go:164] Run: docker container inspect functional-671868 --format={{.State.Status}}
I0731 11:04:25.762835   53855 ssh_runner.go:195] Run: systemctl --version
I0731 11:04:25.762873   53855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-671868
I0731 11:04:25.780723   53855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/functional-671868/id_rsa Username:docker}
I0731 11:04:25.867842   53855 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.87s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-671868 ssh pgrep buildkitd: exit status 1 (252.270567ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 image build -t localhost/my-image:functional-671868 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-671868 image build -t localhost/my-image:functional-671868 testdata/build --alsologtostderr: (1.414517149s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-671868 image build -t localhost/my-image:functional-671868 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 85b39bc9ca7
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-671868
--> e33e3a3bc82
Successfully tagged localhost/my-image:functional-671868
e33e3a3bc825fe8e26f3451ba932691a69be9a5baca5f2447074fa6ce2badd44
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-671868 image build -t localhost/my-image:functional-671868 testdata/build --alsologtostderr:
I0731 11:04:26.002243   54048 out.go:296] Setting OutFile to fd 1 ...
I0731 11:04:26.002402   54048 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0731 11:04:26.002411   54048 out.go:309] Setting ErrFile to fd 2...
I0731 11:04:26.002416   54048 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0731 11:04:26.002644   54048 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16968-8855/.minikube/bin
I0731 11:04:26.003200   54048 config.go:182] Loaded profile config "functional-671868": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0731 11:04:26.003842   54048 config.go:182] Loaded profile config "functional-671868": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0731 11:04:26.004294   54048 cli_runner.go:164] Run: docker container inspect functional-671868 --format={{.State.Status}}
I0731 11:04:26.020955   54048 ssh_runner.go:195] Run: systemctl --version
I0731 11:04:26.021003   54048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-671868
I0731 11:04:26.037880   54048 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/functional-671868/id_rsa Username:docker}
I0731 11:04:26.132720   54048 build_images.go:151] Building image from path: /tmp/build.3453360102.tar
I0731 11:04:26.132775   54048 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0731 11:04:26.141773   54048 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3453360102.tar
I0731 11:04:26.144867   54048 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3453360102.tar: stat -c "%s %y" /var/lib/minikube/build/build.3453360102.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3453360102.tar': No such file or directory
I0731 11:04:26.144895   54048 ssh_runner.go:362] scp /tmp/build.3453360102.tar --> /var/lib/minikube/build/build.3453360102.tar (3072 bytes)
I0731 11:04:26.166275   54048 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3453360102
I0731 11:04:26.173858   54048 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3453360102 -xf /var/lib/minikube/build/build.3453360102.tar
I0731 11:04:26.182196   54048 crio.go:297] Building image: /var/lib/minikube/build/build.3453360102
I0731 11:04:26.182245   54048 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-671868 /var/lib/minikube/build/build.3453360102 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0731 11:04:27.338565   54048 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-671868 /var/lib/minikube/build/build.3453360102 --cgroup-manager=cgroupfs: (1.156289287s)
I0731 11:04:27.338632   54048 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3453360102
I0731 11:04:27.346727   54048 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3453360102.tar
I0731 11:04:27.354523   54048 build_images.go:207] Built localhost/my-image:functional-671868 from /tmp/build.3453360102.tar
I0731 11:04:27.354553   54048 build_images.go:123] succeeded building to: functional-671868
I0731 11:04:27.354559   54048 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.87s)

TestFunctional/parallel/ImageCommands/Setup (1.03s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.008047304s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-671868
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.03s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "309.136742ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "49.91618ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 image load --daemon gcr.io/google-containers/addon-resizer:functional-671868 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-671868 image load --daemon gcr.io/google-containers/addon-resizer:functional-671868 --alsologtostderr: (5.434720523s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.64s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "354.05629ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "94.547719ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (6.76s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-671868 /tmp/TestFunctionalparallelMountCmdany-port2684902654/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1690801438400421898" to /tmp/TestFunctionalparallelMountCmdany-port2684902654/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1690801438400421898" to /tmp/TestFunctionalparallelMountCmdany-port2684902654/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1690801438400421898" to /tmp/TestFunctionalparallelMountCmdany-port2684902654/001/test-1690801438400421898
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-671868 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (274.960863ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 31 11:03 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 31 11:03 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 31 11:03 test-1690801438400421898
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 ssh cat /mount-9p/test-1690801438400421898
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-671868 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [7abb9c7b-6cfd-4aa7-9c57-a841edc7e138] Pending
helpers_test.go:344: "busybox-mount" [7abb9c7b-6cfd-4aa7-9c57-a841edc7e138] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [7abb9c7b-6cfd-4aa7-9c57-a841edc7e138] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [7abb9c7b-6cfd-4aa7-9c57-a841edc7e138] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.011215086s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-671868 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-671868 /tmp/TestFunctionalparallelMountCmdany-port2684902654/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.76s)
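
A minimal sketch of the 9p round-trip this test performs, assuming the same functional-671868 profile; the temp directory, backgrounding, and sleep are illustrative choices rather than part of the harness. Note that the first findmnt above failed once because the 9p server was not yet up, which is why some form of retry (here, a sleep) is needed.

	#!/usr/bin/env bash
	SRC=$(mktemp -d)                                  # host directory to export over 9p
	echo "hello" > "$SRC/created-by-test"
	minikube -p functional-671868 mount "$SRC:/mount-9p" &   # normally runs in the foreground
	MOUNT_PID=$!
	sleep 5                                           # give the 9p server time to come up
	minikube -p functional-671868 ssh "findmnt -T /mount-9p | grep 9p"
	minikube -p functional-671868 ssh "cat /mount-9p/created-by-test"
	kill "$MOUNT_PID"                                 # tear the mount down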

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 image load --daemon gcr.io/google-containers/addon-resizer:functional-671868 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-671868 image load --daemon gcr.io/google-containers/addon-resizer:functional-671868 --alsologtostderr: (3.351710675s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.58s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (10.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-671868
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 image load --daemon gcr.io/google-containers/addon-resizer:functional-671868 --alsologtostderr
2023/07/31 11:04:08 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-671868 image load --daemon gcr.io/google-containers/addon-resizer:functional-671868 --alsologtostderr: (9.11914631s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (10.38s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.99s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-671868 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3514474289/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-671868 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3514474289/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-671868 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3514474289/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-671868 ssh "findmnt -T" /mount1: exit status 1 (384.651081ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-671868 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-671868 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3514474289/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-671868 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3514474289/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-671868 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3514474289/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.99s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (4.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 image save gcr.io/google-containers/addon-resizer:functional-671868 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-671868 image save gcr.io/google-containers/addon-resizer:functional-671868 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (4.198672546s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (4.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 image rm gcr.io/google-containers/addon-resizer:functional-671868 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.44s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.18s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-671868
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-671868 image save --daemon gcr.io/google-containers/addon-resizer:functional-671868 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-671868 image save --daemon gcr.io/google-containers/addon-resizer:functional-671868 --alsologtostderr: (2.119061479s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-671868
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.15s)
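
Taken together, the ImageCommands tests above cover one full round-trip. A minimal sketch, assuming the same profile and image tag as in the log (the ./addon-resizer.tar path is an illustrative stand-in):

	#!/usr/bin/env bash
	# Load an image from the local docker daemon into the node.
	minikube -p functional-671868 image load --daemon gcr.io/google-containers/addon-resizer:functional-671868
	# Save it out of the node to a tarball, remove it, then load it back.
	minikube -p functional-671868 image save gcr.io/google-containers/addon-resizer:functional-671868 ./addon-resizer.tar
	minikube -p functional-671868 image rm gcr.io/google-containers/addon-resizer:functional-671868
	minikube -p functional-671868 image load ./addon-resizer.tar
	minikube -p functional-671868 image ls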

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-671868
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-671868
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-671868
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (66.84s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-033299 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0731 11:05:32.565844   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-033299 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m6.841065832s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (66.84s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.27s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-033299 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-033299 addons enable ingress --alsologtostderr -v=5: (10.265368795s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.27s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.52s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-033299 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.52s)

                                                
                                    
TestJSONOutput/start/Command (66.42s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-831932 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0731 11:08:56.388488   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/functional-671868/client.crt: no such file or directory
E0731 11:09:06.629415   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/functional-671868/client.crt: no such file or directory
E0731 11:09:27.110297   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/functional-671868/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-831932 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m6.417044597s)
--- PASS: TestJSONOutput/start/Command (66.42s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.65s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-831932 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.65s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.57s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-831932 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.57s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.7s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-831932 --output=json --user=testUser
E0731 11:10:08.071856   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/functional-671868/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-831932 --output=json --user=testUser: (5.696946759s)
--- PASS: TestJSONOutput/stop/Command (5.70s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-794563 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-794563 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (61.795459ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"5a8dbb15-7ec7-4038-9601-88542e0bce3d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-794563] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"33e76105-83a4-4b73-905b-37b21dabd6ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16968"}}
	{"specversion":"1.0","id":"c30a0cb8-6a57-4859-aec6-f259c8c1c7a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4936163b-a941-4078-be52-46a9a9992dc5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/16968-8855/kubeconfig"}}
	{"specversion":"1.0","id":"e6efcdab-8878-4a73-a5ca-b4edfc4f1fd5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/16968-8855/.minikube"}}
	{"specversion":"1.0","id":"8c4c43e8-528f-4c66-857b-713694c44c2a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"ead255fa-d39a-4410-ac4c-d1a532db46a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"488fd0a3-744d-469a-9c6c-da88165e1a38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-794563" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-794563
--- PASS: TestErrorJSONOutput (0.19s)
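
Each line of --output=json above is a self-contained CloudEvents object; the type suffix (step, info, error) and the data payload carry the structured fields. A minimal sketch of filtering the error events, assuming jq is installed and using a throwaway profile name; --driver=fail deliberately reproduces the failing run shown above.

	#!/usr/bin/env bash
	minikube start -p json-demo --output=json --driver=fail 2>/dev/null |
	  jq -r 'select(.type | endswith(".error")) | .data | "\(.name) (exit \(.exitcode)): \(.message)"'
	# Expected, per the run above:
	# DRV_UNSUPPORTED_OS (exit 56): The driver 'fail' is not supported on linux/amd64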

                                                
                                    
TestKicCustomNetwork/create_custom_network (29.89s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-763755 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-763755 --network=: (27.846683572s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-763755" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-763755
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-763755: (2.025736796s)
--- PASS: TestKicCustomNetwork/create_custom_network (29.89s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (24.61s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-762694 --network=bridge
E0731 11:10:53.196038   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/client.crt: no such file or directory
E0731 11:10:53.203983   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/client.crt: no such file or directory
E0731 11:10:53.214347   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/client.crt: no such file or directory
E0731 11:10:53.235289   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/client.crt: no such file or directory
E0731 11:10:53.275470   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/client.crt: no such file or directory
E0731 11:10:53.355659   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/client.crt: no such file or directory
E0731 11:10:53.516038   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/client.crt: no such file or directory
E0731 11:10:53.836611   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/client.crt: no such file or directory
E0731 11:10:54.477507   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/client.crt: no such file or directory
E0731 11:10:55.758000   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/client.crt: no such file or directory
E0731 11:10:58.319996   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/client.crt: no such file or directory
E0731 11:11:03.440336   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-762694 --network=bridge: (22.668599068s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-762694" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-762694
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-762694: (1.925744308s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.61s)

                                                
                                    
TestKicExistingNetwork (26.71s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-075221 --network=existing-network
E0731 11:11:13.681461   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/client.crt: no such file or directory
E0731 11:11:29.992582   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/functional-671868/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-075221 --network=existing-network: (24.643830125s)
helpers_test.go:175: Cleaning up "existing-network-075221" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-075221
E0731 11:11:34.162094   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-075221: (1.934545189s)
--- PASS: TestKicExistingNetwork (26.71s)

                                                
                                    
TestKicCustomSubnet (26.62s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-720757 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-720757 --subnet=192.168.60.0/24: (24.531534434s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-720757 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-720757" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-720757
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-720757: (2.066036821s)
--- PASS: TestKicCustomSubnet (26.62s)
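
A minimal sketch of the subnet check, assuming a throwaway profile name; minikube names the docker network after the profile, and the --format expression is the same one the test uses.

	#!/usr/bin/env bash
	minikube start -p subnet-demo --subnet=192.168.60.0/24
	docker network inspect subnet-demo --format "{{(index .IPAM.Config 0).Subnet}}"   # expect 192.168.60.0/24
	minikube delete -p subnet-demo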

                                                
                                    
TestKicStaticIP (24.7s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-963229 --static-ip=192.168.200.200
E0731 11:12:15.122703   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-963229 --static-ip=192.168.200.200: (22.607471821s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-963229 ip
helpers_test.go:175: Cleaning up "static-ip-963229" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-963229
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-963229: (1.973412383s)
--- PASS: TestKicStaticIP (24.70s)
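
Likewise for a pinned node address, a minimal sketch assuming a throwaway profile; minikube ip should echo the requested address back.

	#!/usr/bin/env bash
	minikube start -p staticip-demo --static-ip=192.168.200.200
	minikube -p staticip-demo ip        # expect 192.168.200.200
	minikube delete -p staticip-demo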

                                                
                                    
TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (52.3s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-354142 --driver=docker  --container-runtime=crio
E0731 11:12:48.724029   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-354142 --driver=docker  --container-runtime=crio: (24.465130441s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-356545 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-356545 --driver=docker  --container-runtime=crio: (23.2085657s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-354142
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-356545
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-356545" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-356545
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-356545: (1.842997672s)
helpers_test.go:175: Cleaning up "first-354142" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-354142
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-354142: (1.83368075s)
--- PASS: TestMinikubeProfile (52.30s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.91s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-861088 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-861088 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.911553301s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.91s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.23s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-861088 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.23s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.81s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-906460 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-906460 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.813779314s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.81s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.24s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-906460 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.24s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.6s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-861088 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-861088 --alsologtostderr -v=5: (1.598623628s)
--- PASS: TestMountStart/serial/DeleteFirst (1.60s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-906460 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.24s)

                                                
                                    
TestMountStart/serial/Stop (1.18s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-906460
E0731 11:13:37.043123   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/client.crt: no such file or directory
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-906460: (1.18396706s)
--- PASS: TestMountStart/serial/Stop (1.18s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.11s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-906460
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-906460: (6.111940424s)
--- PASS: TestMountStart/serial/RestartStopped (7.11s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.23s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-906460 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.23s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (85.92s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-249026 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0731 11:14:13.832913   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/functional-671868/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-249026 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m25.497143964s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-249026 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (85.92s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (3.24s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-249026 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-249026 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-249026 -- rollout status deployment/busybox: (1.615933736s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-249026 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-249026 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-249026 -- exec busybox-67b7f59bb-fvzbv -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-249026 -- exec busybox-67b7f59bb-nhmrt -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-249026 -- exec busybox-67b7f59bb-fvzbv -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-249026 -- exec busybox-67b7f59bb-nhmrt -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-249026 -- exec busybox-67b7f59bb-fvzbv -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-249026 -- exec busybox-67b7f59bb-nhmrt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.24s)
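
A minimal sketch of the per-pod DNS probes above; looking the pod names up by label avoids hard-coding the busybox-67b7f59bb-* suffixes, though the app=busybox selector is an assumption about the test manifest rather than something shown in the log.

	#!/usr/bin/env bash
	# Probe in-cluster DNS from every busybox replica, as the test does per pod.
	for pod in $(kubectl get pods -l app=busybox -o jsonpath='{.items[*].metadata.name}'); do
	  kubectl exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
	done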

                                                
                                    
TestMultiNode/serial/AddNode (15.91s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-249026 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-249026 -v 3 --alsologtostderr: (15.339002169s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-249026 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (15.91s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.26s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.26s)

                                                
                                    
TestMultiNode/serial/CopyFile (8.6s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-249026 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-249026 cp testdata/cp-test.txt multinode-249026:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-249026 ssh -n multinode-249026 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-249026 cp multinode-249026:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3609458641/001/cp-test_multinode-249026.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-249026 ssh -n multinode-249026 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-249026 cp multinode-249026:/home/docker/cp-test.txt multinode-249026-m02:/home/docker/cp-test_multinode-249026_multinode-249026-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-249026 ssh -n multinode-249026 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-249026 ssh -n multinode-249026-m02 "sudo cat /home/docker/cp-test_multinode-249026_multinode-249026-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-249026 cp multinode-249026:/home/docker/cp-test.txt multinode-249026-m03:/home/docker/cp-test_multinode-249026_multinode-249026-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-249026 ssh -n multinode-249026 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-249026 ssh -n multinode-249026-m03 "sudo cat /home/docker/cp-test_multinode-249026_multinode-249026-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-249026 cp testdata/cp-test.txt multinode-249026-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-249026 ssh -n multinode-249026-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-249026 cp multinode-249026-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3609458641/001/cp-test_multinode-249026-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-249026 ssh -n multinode-249026-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-249026 cp multinode-249026-m02:/home/docker/cp-test.txt multinode-249026:/home/docker/cp-test_multinode-249026-m02_multinode-249026.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-249026 ssh -n multinode-249026-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-249026 ssh -n multinode-249026 "sudo cat /home/docker/cp-test_multinode-249026-m02_multinode-249026.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-249026 cp multinode-249026-m02:/home/docker/cp-test.txt multinode-249026-m03:/home/docker/cp-test_multinode-249026-m02_multinode-249026-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-249026 ssh -n multinode-249026-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-249026 ssh -n multinode-249026-m03 "sudo cat /home/docker/cp-test_multinode-249026-m02_multinode-249026-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-249026 cp testdata/cp-test.txt multinode-249026-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-249026 ssh -n multinode-249026-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-249026 cp multinode-249026-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3609458641/001/cp-test_multinode-249026-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-249026 ssh -n multinode-249026-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-249026 cp multinode-249026-m03:/home/docker/cp-test.txt multinode-249026:/home/docker/cp-test_multinode-249026-m03_multinode-249026.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-249026 ssh -n multinode-249026-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-249026 ssh -n multinode-249026 "sudo cat /home/docker/cp-test_multinode-249026-m03_multinode-249026.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-249026 cp multinode-249026-m03:/home/docker/cp-test.txt multinode-249026-m02:/home/docker/cp-test_multinode-249026-m03_multinode-249026-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-249026 ssh -n multinode-249026-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-249026 ssh -n multinode-249026-m02 "sudo cat /home/docker/cp-test_multinode-249026-m03_multinode-249026-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.60s)
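
For readers replaying this by hand, the copy/verify loop above reduces to a short shell sketch. Assumptions: a running multi-node profile named multinode-249026 (taken from this run), a minikube binary on PATH standing in for out/minikube-linux-amd64, and an illustrative node-to-node target filename.

    # host -> node, then node -> node; verify each hop by cat-ing the file over ssh
    minikube -p multinode-249026 cp testdata/cp-test.txt multinode-249026:/home/docker/cp-test.txt
    minikube -p multinode-249026 ssh -n multinode-249026 "sudo cat /home/docker/cp-test.txt"
    minikube -p multinode-249026 cp multinode-249026:/home/docker/cp-test.txt multinode-249026-m02:/home/docker/cp-test_copy.txt   # target name is illustrative
    minikube -p multinode-249026 ssh -n multinode-249026-m02 "sudo cat /home/docker/cp-test_copy.txt"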

                                                
                                    
TestMultiNode/serial/StopNode (2.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-249026 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-249026 node stop m03: (1.180611617s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-249026 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-249026 status: exit status 7 (444.70473ms)

                                                
                                                
-- stdout --
	multinode-249026
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-249026-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-249026-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-249026 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-249026 status --alsologtostderr: exit status 7 (437.388178ms)

                                                
                                                
-- stdout --
	multinode-249026
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-249026-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-249026-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 11:15:45.457464  113509 out.go:296] Setting OutFile to fd 1 ...
	I0731 11:15:45.457602  113509 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 11:15:45.457612  113509 out.go:309] Setting ErrFile to fd 2...
	I0731 11:15:45.457619  113509 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 11:15:45.457873  113509 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16968-8855/.minikube/bin
	I0731 11:15:45.458073  113509 out.go:303] Setting JSON to false
	I0731 11:15:45.458107  113509 mustload.go:65] Loading cluster: multinode-249026
	I0731 11:15:45.458207  113509 notify.go:220] Checking for updates...
	I0731 11:15:45.458499  113509 config.go:182] Loaded profile config "multinode-249026": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0731 11:15:45.458514  113509 status.go:255] checking status of multinode-249026 ...
	I0731 11:15:45.458890  113509 cli_runner.go:164] Run: docker container inspect multinode-249026 --format={{.State.Status}}
	I0731 11:15:45.475532  113509 status.go:330] multinode-249026 host status = "Running" (err=<nil>)
	I0731 11:15:45.475570  113509 host.go:66] Checking if "multinode-249026" exists ...
	I0731 11:15:45.475820  113509 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-249026
	I0731 11:15:45.491535  113509 host.go:66] Checking if "multinode-249026" exists ...
	I0731 11:15:45.491786  113509 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 11:15:45.491827  113509 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-249026
	I0731 11:15:45.509618  113509 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/multinode-249026/id_rsa Username:docker}
	I0731 11:15:45.596751  113509 ssh_runner.go:195] Run: systemctl --version
	I0731 11:15:45.600547  113509 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 11:15:45.610783  113509 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 11:15:45.660898  113509 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:56 SystemTime:2023-07-31 11:15:45.652886404 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1038-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0731 11:15:45.661649  113509 kubeconfig.go:92] found "multinode-249026" server: "https://192.168.58.2:8443"
	I0731 11:15:45.661674  113509 api_server.go:166] Checking apiserver status ...
	I0731 11:15:45.661710  113509 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 11:15:45.671506  113509 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1446/cgroup
	I0731 11:15:45.679528  113509 api_server.go:182] apiserver freezer: "7:freezer:/docker/6c9e307d8fbb6aa504e1b1671d79e2f602df444dc5ede76a67d98aec5cb168ff/crio/crio-f74bae863be8f53a453e7c3463ca4704e6669d3adae7cd2470e722f45dd73d1f"
	I0731 11:15:45.679575  113509 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/6c9e307d8fbb6aa504e1b1671d79e2f602df444dc5ede76a67d98aec5cb168ff/crio/crio-f74bae863be8f53a453e7c3463ca4704e6669d3adae7cd2470e722f45dd73d1f/freezer.state
	I0731 11:15:45.686793  113509 api_server.go:204] freezer state: "THAWED"
	I0731 11:15:45.686812  113509 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0731 11:15:45.692675  113509 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0731 11:15:45.692696  113509 status.go:421] multinode-249026 apiserver status = Running (err=<nil>)
	I0731 11:15:45.692704  113509 status.go:257] multinode-249026 status: &{Name:multinode-249026 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 11:15:45.692719  113509 status.go:255] checking status of multinode-249026-m02 ...
	I0731 11:15:45.692932  113509 cli_runner.go:164] Run: docker container inspect multinode-249026-m02 --format={{.State.Status}}
	I0731 11:15:45.709073  113509 status.go:330] multinode-249026-m02 host status = "Running" (err=<nil>)
	I0731 11:15:45.709098  113509 host.go:66] Checking if "multinode-249026-m02" exists ...
	I0731 11:15:45.709315  113509 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-249026-m02
	I0731 11:15:45.725380  113509 host.go:66] Checking if "multinode-249026-m02" exists ...
	I0731 11:15:45.725623  113509 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 11:15:45.725680  113509 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-249026-m02
	I0731 11:15:45.741102  113509 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16968-8855/.minikube/machines/multinode-249026-m02/id_rsa Username:docker}
	I0731 11:15:45.828798  113509 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 11:15:45.838826  113509 status.go:257] multinode-249026-m02 status: &{Name:multinode-249026-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0731 11:15:45.838868  113509 status.go:255] checking status of multinode-249026-m03 ...
	I0731 11:15:45.839101  113509 cli_runner.go:164] Run: docker container inspect multinode-249026-m03 --format={{.State.Status}}
	I0731 11:15:45.856073  113509 status.go:330] multinode-249026-m03 host status = "Stopped" (err=<nil>)
	I0731 11:15:45.856092  113509 status.go:343] host is not running, skipping remaining checks
	I0731 11:15:45.856099  113509 status.go:257] multinode-249026-m03 status: &{Name:multinode-249026-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.06s)
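
The exit-code convention asserted here is worth spelling out: minikube status exits 0 only when every node is fully up, and exits 7 as soon as any host reports Stopped (both status runs above exit 7 with m03 down). A minimal reproduction, same profile assumed:

    minikube -p multinode-249026 node stop m03
    minikube -p multinode-249026 status   # assumes the multinode-249026 profile from this run
    echo $?                               # 7: at least one host is Stopped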

                                                
                                    
TestMultiNode/serial/StartAfterStop (10.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-249026 node start m03 --alsologtostderr
E0731 11:15:53.194823   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/client.crt: no such file or directory
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-249026 node start m03 --alsologtostderr: (10.074923904s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-249026 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.73s)
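
The restart direction is symmetric; a sketch of the sequence exercised above, under the same assumptions:

    minikube -p multinode-249026 node start m03 --alsologtostderr
    minikube -p multinode-249026 status   # exits 0 again once every host is Running
    kubectl get nodes                     # the restarted worker should rejoin the cluster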

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (113.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-249026
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-249026
E0731 11:16:20.884197   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/client.crt: no such file or directory
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-249026: (24.873132687s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-249026 --wait=true -v=8 --alsologtostderr
E0731 11:17:48.722936   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-249026 --wait=true -v=8 --alsologtostderr: (1m28.057747698s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-249026
--- PASS: TestMultiNode/serial/RestartKeepsNodes (113.02s)
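
What this test pins down is that a full stop/start cycle preserves the node list rather than collapsing the profile to a single node. Roughly, with the same profile assumed:

    minikube node list -p multinode-249026   # record the node list
    minikube stop -p multinode-249026
    minikube start -p multinode-249026 --wait=true -v=8 --alsologtostderr
    minikube node list -p multinode-249026   # expect the same list as before the stop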

                                                
                                    
TestMultiNode/serial/DeleteNode (4.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-249026 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-249026 node delete m03: (4.061481481s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-249026 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.63s)
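
Deletion is verified three ways above: the node is gone from the profile, its Docker volume is gone, and the Kubernetes Node object is gone. A sketch (the go-template is the one from the test, with shell quoting normalized):

    minikube -p multinode-249026 node delete m03   # assumes the profile from this run
    docker volume ls                               # no volume left for the deleted node
    kubectl get nodes
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'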

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-249026 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-249026 stop: (23.706645366s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-249026 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-249026 status: exit status 7 (76.425435ms)

                                                
                                                
-- stdout --
	multinode-249026
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-249026-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-249026 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-249026 status --alsologtostderr: exit status 7 (77.922791ms)

                                                
                                                
-- stdout --
	multinode-249026
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-249026-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 11:18:18.046132  123916 out.go:296] Setting OutFile to fd 1 ...
	I0731 11:18:18.046260  123916 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 11:18:18.046268  123916 out.go:309] Setting ErrFile to fd 2...
	I0731 11:18:18.046273  123916 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 11:18:18.046474  123916 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16968-8855/.minikube/bin
	I0731 11:18:18.046632  123916 out.go:303] Setting JSON to false
	I0731 11:18:18.046651  123916 mustload.go:65] Loading cluster: multinode-249026
	I0731 11:18:18.046980  123916 notify.go:220] Checking for updates...
	I0731 11:18:18.048136  123916 config.go:182] Loaded profile config "multinode-249026": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0731 11:18:18.048166  123916 status.go:255] checking status of multinode-249026 ...
	I0731 11:18:18.048899  123916 cli_runner.go:164] Run: docker container inspect multinode-249026 --format={{.State.Status}}
	I0731 11:18:18.069810  123916 status.go:330] multinode-249026 host status = "Stopped" (err=<nil>)
	I0731 11:18:18.069838  123916 status.go:343] host is not running, skipping remaining checks
	I0731 11:18:18.069844  123916 status.go:257] multinode-249026 status: &{Name:multinode-249026 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 11:18:18.069882  123916 status.go:255] checking status of multinode-249026-m02 ...
	I0731 11:18:18.070129  123916 cli_runner.go:164] Run: docker container inspect multinode-249026-m02 --format={{.State.Status}}
	I0731 11:18:18.086042  123916 status.go:330] multinode-249026-m02 host status = "Stopped" (err=<nil>)
	I0731 11:18:18.086063  123916 status.go:343] host is not running, skipping remaining checks
	I0731 11:18:18.086069  123916 status.go:257] multinode-249026-m02 status: &{Name:multinode-249026-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.86s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (78.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-249026 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0731 11:18:46.147536   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/functional-671868/client.crt: no such file or directory
E0731 11:19:11.766942   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-249026 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m17.789599651s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-249026 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (78.35s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (23.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-249026
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-249026-m02 --driver=docker  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-249026-m02 --driver=docker  --container-runtime=crio: exit status 14 (59.1991ms)

                                                
                                                
-- stdout --
	* [multinode-249026-m02] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16968
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16968-8855/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16968-8855/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-249026-m02' is duplicated with machine name 'multinode-249026-m02' in profile 'multinode-249026'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-249026-m03 --driver=docker  --container-runtime=crio
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-249026-m03 --driver=docker  --container-runtime=crio: (21.243696553s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-249026
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-249026: exit status 80 (256.031782ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-249026
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-249026-m03 already exists in multinode-249026-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-249026-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-249026-m03: (1.794874943s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (23.39s)
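
Two distinct guards fire in this test: a new profile may not reuse the machine name of a node in an existing multi-node profile (exit 14, MK_USAGE), and node add refuses a node name already claimed by a standalone profile (exit 80, GUEST_NODE_ADD). Condensed:

    # assumes multinode-249026 is still running, as in this run
    minikube start -p multinode-249026-m02 --driver=docker --container-runtime=crio   # exit 14: collides with machine multinode-249026-m02
    minikube start -p multinode-249026-m03 --driver=docker --container-runtime=crio   # succeeds: no collision yet
    minikube node add -p multinode-249026                                             # exit 80: m03 is taken by the standalone profile
    minikube delete -p multinode-249026-m03                                           # cleanup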

                                                
                                    
TestPreload (125.5s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-523966 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0731 11:20:53.194889   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-523966 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m9.977318808s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-523966 image pull gcr.io/k8s-minikube/busybox
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-523966
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-523966: (5.702767604s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-523966 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-523966 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (46.446502959s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-523966 image list
helpers_test.go:175: Cleaning up "test-preload-523966" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-523966
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-523966: (2.236636157s)
--- PASS: TestPreload (125.50s)
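
The preload property being tested: an image pulled into a cluster that was started with --preload=false must survive a stop and a subsequent default (preload-enabled) restart. A condensed sketch with a shortened, illustrative profile name:

    minikube start -p test-preload --memory=2200 --wait=true --preload=false --driver=docker --container-runtime=crio --kubernetes-version=v1.24.4
    minikube -p test-preload image pull gcr.io/k8s-minikube/busybox
    minikube stop -p test-preload
    minikube start -p test-preload --memory=2200 --wait=true --driver=docker --container-runtime=crio
    minikube -p test-preload image list   # busybox should still be listed
    minikube delete -p test-preload       # cleanup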

                                                
                                    
TestScheduledStopUnix (97.58s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-365898 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-365898 --memory=2048 --driver=docker  --container-runtime=crio: (20.932395938s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-365898 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-365898 -n scheduled-stop-365898
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-365898 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-365898 --cancel-scheduled
E0731 11:22:48.724598   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-365898 -n scheduled-stop-365898
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-365898
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-365898 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-365898
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-365898: exit status 7 (57.753224ms)

                                                
                                                
-- stdout --
	scheduled-stop-365898
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-365898 -n scheduled-stop-365898
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-365898 -n scheduled-stop-365898: exit status 7 (57.529361ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-365898" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-365898
E0731 11:23:46.147977   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/functional-671868/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-365898: (5.423123294s)
--- PASS: TestScheduledStopUnix (97.58s)
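
The scheduled-stop lifecycle walked above: arm a delayed stop, confirm it is pending via TimeToStop, cancel it, then arm a short one and watch status drop to exit 7. A hand-run sketch using the profile name from this run:

    minikube stop -p scheduled-stop-365898 --schedule 5m
    minikube status --format={{.TimeToStop}} -p scheduled-stop-365898 -n scheduled-stop-365898
    minikube stop -p scheduled-stop-365898 --cancel-scheduled   # disarm the pending stop
    minikube stop -p scheduled-stop-365898 --schedule 15s
    sleep 20 && minikube status -p scheduled-stop-365898        # exit 7: host Stopped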

                                                
                                    
TestInsufficientStorage (9.89s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-498727 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-498727 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.627103086s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"cd5e73a5-16c9-4d94-8c98-6c890012895e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-498727] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f2de0a9b-aa6c-4e4f-bee1-fbbe92950440","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16968"}}
	{"specversion":"1.0","id":"f5fa4d3d-10fc-464f-b4e9-64ef92a5efcb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"715f2b90-dcbe-4e85-bb84-9ae05ae31cfb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/16968-8855/kubeconfig"}}
	{"specversion":"1.0","id":"1f0914ae-a9e8-470a-a47f-6685c46d5d4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/16968-8855/.minikube"}}
	{"specversion":"1.0","id":"7303d0b2-0312-4272-af9e-356621c5893c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"f0be4720-1c3c-43aa-be29-e316104184c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a7e7576b-4d3c-4cb2-a007-3b8333c328dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"70b6bb84-5d2f-45a2-a7d1-8f0db801cb6b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"9bb595eb-7121-45b8-be85-cb94bbb160dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"042ec487-2aef-47ff-8e9f-c3a7521832a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"2d0c8014-0049-4038-881c-44238850266f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-498727 in cluster insufficient-storage-498727","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"a5814bbf-25dd-409f-b00c-78af8c1d275f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"a237e369-ed60-4295-8adf-6dacb203c067","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"27707e35-20e6-4959-9518-f81d85d958b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-498727 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-498727 --output=json --layout=cluster: exit status 7 (250.314753ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-498727","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.31.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-498727","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 11:23:56.286678  145352 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-498727" does not appear in /home/jenkins/minikube-integration/16968-8855/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-498727 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-498727 --output=json --layout=cluster: exit status 7 (245.924929ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-498727","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.31.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-498727","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 11:23:56.533480  145450 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-498727" does not appear in /home/jenkins/minikube-integration/16968-8855/kubeconfig
	E0731 11:23:56.542718  145450 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/insufficient-storage-498727/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-498727" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-498727
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-498727: (1.765065174s)
--- PASS: TestInsufficientStorage (9.89s)
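
With --output=json, start emits one CloudEvents-style JSON object per line, so the terminal error (exit 26, RSRC_DOCKER_STORAGE) can be picked out mechanically. A sketch, assuming jq is available (it is not part of the harness) and that the MINIKUBE_TEST_* variables echoed in the event stream above are how the simulated full /var is injected:

    # assumption: jq installed; MINIKUBE_TEST_* values taken from the events above
    MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
      minikube start -p insufficient-storage --memory=2048 --output=json --wait=true --driver=docker --container-runtime=crio \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'

A follow-up minikube status --output=json --layout=cluster then reports StatusCode 507 (InsufficientStorage), as captured in the two status probes above.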

                                                
                                    
TestKubernetesUpgrade (361.01s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-182090 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-182090 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (52.977266478s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-182090
version_upgrade_test.go:239: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-182090: (3.333713014s)
version_upgrade_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-182090 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-182090 status --format={{.Host}}: exit status 7 (68.949773ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-182090 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0731 11:25:53.194970   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/client.crt: no such file or directory
version_upgrade_test.go:255: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-182090 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m33.965249637s)
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-182090 version --output=json
version_upgrade_test.go:279: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:281: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-182090 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:281: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-182090 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (943.446492ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-182090] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16968
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16968-8855/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16968-8855/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.27.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-182090
	    minikube start -p kubernetes-upgrade-182090 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1820902 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.27.3, by running:
	    
	    minikube start -p kubernetes-upgrade-182090 --kubernetes-version=v1.27.3
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:285: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:287: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-182090 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:287: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-182090 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.567845696s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-182090" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-182090
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-182090: (2.090174029s)
--- PASS: TestKubernetesUpgrade (361.01s)
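
The upgrade path walked here: bring the cluster up on an old Kubernetes, stop it, start it again with a newer --kubernetes-version, then prove the reverse direction is refused (exit 106, K8S_DOWNGRADE_UNSUPPORTED) while a same-version restart still succeeds. Condensed, profile name from this run:

    minikube start -p kubernetes-upgrade-182090 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio
    minikube stop -p kubernetes-upgrade-182090
    minikube start -p kubernetes-upgrade-182090 --memory=2200 --kubernetes-version=v1.27.3 --driver=docker --container-runtime=crio
    minikube start -p kubernetes-upgrade-182090 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio   # exit 106: downgrade refused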

                                                
                                    
TestMissingContainerUpgrade (174.49s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:321: (dbg) Run:  /tmp/minikube-v1.9.0.1992292445.exe start -p missing-upgrade-098284 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:321: (dbg) Done: /tmp/minikube-v1.9.0.1992292445.exe start -p missing-upgrade-098284 --memory=2200 --driver=docker  --container-runtime=crio: (1m24.14730694s)
version_upgrade_test.go:330: (dbg) Run:  docker stop missing-upgrade-098284
version_upgrade_test.go:330: (dbg) Done: docker stop missing-upgrade-098284: (4.222266267s)
version_upgrade_test.go:335: (dbg) Run:  docker rm missing-upgrade-098284
version_upgrade_test.go:341: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-098284 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:341: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-098284 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m23.559302975s)
helpers_test.go:175: Cleaning up "missing-upgrade-098284" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-098284
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-098284: (2.043779021s)
--- PASS: TestMissingContainerUpgrade (174.49s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-987170 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-987170 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (71.235327ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-987170] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16968
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16968-8855/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16968-8855/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
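
As the stderr above spells out, --no-kubernetes and --kubernetes-version are mutually exclusive; the version may also come from global config, in which case it must be unset first:

    minikube start -p NoKubernetes-987170 --no-kubernetes --kubernetes-version=1.20 --driver=docker --container-runtime=crio   # exit 14: MK_USAGE
    minikube config unset kubernetes-version   # clears a globally configured version, per the hint above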

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (35.7s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-987170 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-987170 --driver=docker  --container-runtime=crio: (35.405666947s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-987170 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (35.70s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (7.6s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-987170 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-987170 --no-kubernetes --driver=docker  --container-runtime=crio: (5.300888181s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-987170 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-987170 status -o json: exit status 2 (299.588648ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-987170","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-987170
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-987170: (2.000853157s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.60s)
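
Alongside the exit-7 convention noted earlier, this adds a second data point to the status exit-code taxonomy: a host that is Running with kubelet Stopped makes status exit 2, which is exactly what restarting an existing profile with --no-kubernetes produces. Sketch:

    minikube start -p NoKubernetes-987170 --no-kubernetes --driver=docker --container-runtime=crio
    minikube -p NoKubernetes-987170 status -o json   # profile name from this run
    echo $?                                          # 2: host Running, kubelet and apiserver Stopped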

                                                
                                    
TestNoKubernetes/serial/Start (10.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-987170 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-987170 --no-kubernetes --driver=docker  --container-runtime=crio: (10.085658095s)
--- PASS: TestNoKubernetes/serial/Start (10.09s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.46s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.46s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-987170 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-987170 "sudo systemctl is-active --quiet service kubelet": exit status 1 (324.255711ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)
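
The "Process exited with status 3" in the stderr block is the actual assertion: systemctl is-active exits non-zero (3 here) for a unit that is not active, so the failing ssh command doubles as proof that kubelet is not running. The equivalent check by hand, command verbatim from the test:

    minikube ssh -p NoKubernetes-987170 "sudo systemctl is-active --quiet service kubelet"
    echo $?   # non-zero (3 in this run): the kubelet unit is not active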

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.45s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.45s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-987170
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-987170: (1.284093775s)
--- PASS: TestNoKubernetes/serial/Stop (1.28s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (8.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-987170 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-987170 --driver=docker  --container-runtime=crio: (8.377043452s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.38s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-987170 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-987170 "sudo systemctl is-active --quiet service kubelet": exit status 1 (380.761659ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.54s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-841889
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.54s)

                                                
                                    
TestPause/serial/Start (43.46s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-165255 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-165255 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (43.455489176s)
--- PASS: TestPause/serial/Start (43.46s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (27.4s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-165255 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0731 11:27:16.245240   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-165255 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.372992786s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (27.40s)

                                                
                                    
TestNetworkPlugins/group/false (2.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-359242 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-359242 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (153.896115ms)

                                                
                                                
-- stdout --
	* [false-359242] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16968
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16968-8855/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16968-8855/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 11:27:22.266670  199092 out.go:296] Setting OutFile to fd 1 ...
	I0731 11:27:22.266810  199092 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 11:27:22.266820  199092 out.go:309] Setting ErrFile to fd 2...
	I0731 11:27:22.266826  199092 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0731 11:27:22.267085  199092 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16968-8855/.minikube/bin
	I0731 11:27:22.267689  199092 out.go:303] Setting JSON to false
	I0731 11:27:22.269379  199092 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4193,"bootTime":1690798649,"procs":855,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1038-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 11:27:22.269449  199092 start.go:138] virtualization: kvm guest
	I0731 11:27:22.272540  199092 out.go:177] * [false-359242] minikube v1.31.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 11:27:22.274353  199092 out.go:177]   - MINIKUBE_LOCATION=16968
	I0731 11:27:22.274295  199092 notify.go:220] Checking for updates...
	I0731 11:27:22.275992  199092 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 11:27:22.277520  199092 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16968-8855/kubeconfig
	I0731 11:27:22.279000  199092 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16968-8855/.minikube
	I0731 11:27:22.280389  199092 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 11:27:22.281812  199092 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 11:27:22.283597  199092 config.go:182] Loaded profile config "cert-expiration-735531": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0731 11:27:22.283730  199092 config.go:182] Loaded profile config "kubernetes-upgrade-182090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0731 11:27:22.283905  199092 config.go:182] Loaded profile config "pause-165255": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0731 11:27:22.284017  199092 driver.go:373] Setting default libvirt URI to qemu:///system
	I0731 11:27:22.308647  199092 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0731 11:27:22.308748  199092 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 11:27:22.365380  199092 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:66 SystemTime:2023-07-31 11:27:22.355784761 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1038-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0731 11:27:22.365480  199092 docker.go:294] overlay module found
	I0731 11:27:22.367444  199092 out.go:177] * Using the docker driver based on user configuration
	I0731 11:27:22.368916  199092 start.go:298] selected driver: docker
	I0731 11:27:22.368935  199092 start.go:898] validating driver "docker" against <nil>
	I0731 11:27:22.368946  199092 start.go:909] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 11:27:22.371187  199092 out.go:177] 
	W0731 11:27:22.372688  199092 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0731 11:27:22.374095  199092 out.go:177] 

                                                
                                                
** /stderr **
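Note: this non-zero exit (status 14) is the expected outcome; the test confirms that minikube refuses "--cni=false" with the crio runtime, since CRI-O requires a CNI plugin for pod networking. Purely as an illustration (not part of the test run), a start command that satisfies this check would supply an explicit CNI instead, e.g.:

	out/minikube-linux-amd64 start -p false-359242 --memory=2048 --cni=bridge --driver=docker --container-runtime=crio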
net_test.go:88: 
----------------------- debugLogs start: false-359242 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-359242

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-359242

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-359242

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-359242

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-359242

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-359242

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-359242

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-359242

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-359242

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-359242

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-359242"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-359242"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-359242"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-359242

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-359242"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-359242"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-359242" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-359242" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-359242" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-359242" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-359242" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-359242" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-359242" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-359242" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-359242"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-359242"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-359242"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-359242"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-359242"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-359242" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-359242" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-359242" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-359242"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-359242"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-359242"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-359242"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-359242"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/16968-8855/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 31 Jul 2023 11:26:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.1
      name: cluster_info
    server: https://192.168.94.2:8443
  name: cert-expiration-735531
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/16968-8855/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 31 Jul 2023 11:25:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-182090
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/16968-8855/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 31 Jul 2023 11:27:12 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.1
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-165255
contexts:
- context:
    cluster: cert-expiration-735531
    extensions:
    - extension:
        last-update: Mon, 31 Jul 2023 11:26:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.1
      name: context_info
    namespace: default
    user: cert-expiration-735531
  name: cert-expiration-735531
- context:
    cluster: kubernetes-upgrade-182090
    user: kubernetes-upgrade-182090
  name: kubernetes-upgrade-182090
- context:
    cluster: pause-165255
    extensions:
    - extension:
        last-update: Mon, 31 Jul 2023 11:27:12 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.1
      name: context_info
    namespace: default
    user: pause-165255
  name: pause-165255
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-735531
  user:
    client-certificate: /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/cert-expiration-735531/client.crt
    client-key: /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/cert-expiration-735531/client.key
- name: kubernetes-upgrade-182090
  user:
    client-certificate: /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/kubernetes-upgrade-182090/client.crt
    client-key: /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/kubernetes-upgrade-182090/client.key
- name: pause-165255
  user:
    client-certificate: /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/pause-165255/client.crt
    client-key: /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/pause-165255/client.key
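Note: current-context is empty and no false-359242 entry exists in this kubeconfig, which is why every kubectl call in these debug logs reports a missing context. As an illustrative command only (not run by the test), one of the listed contexts could be selected with:

	kubectl config use-context pause-165255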

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-359242

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-359242"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-359242"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-359242"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-359242"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-359242"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-359242"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-359242"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-359242"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-359242"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-359242"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-359242"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-359242"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-359242"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-359242"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-359242"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-359242"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-359242"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-359242"

                                                
                                                
----------------------- debugLogs end: false-359242 [took: 2.61123881s] --------------------------------
helpers_test.go:175: Cleaning up "false-359242" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-359242
--- PASS: TestNetworkPlugins/group/false (2.90s)

                                                
                                    
x
+
TestPause/serial/Pause (0.7s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-165255 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.70s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.29s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-165255 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-165255 --output=json --layout=cluster: exit status 2 (292.984368ms)

                                                
                                                
-- stdout --
	{"Name":"pause-165255","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.31.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-165255","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.29s)
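Note: in the cluster-layout JSON above, StatusCode 200 means OK, 418 means Paused, and 405 means Stopped, matching the paired StatusName fields. For scripting, a single field can be extracted from that output; a minimal sketch, assuming jq is installed on the host:

	out/minikube-linux-amd64 status -p pause-165255 --output=json --layout=cluster | jq -r '.StatusName'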

                                                
                                    
x
+
TestPause/serial/Unpause (0.68s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-165255 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.68s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.8s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-165255 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.80s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (3.89s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-165255 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-165255 --alsologtostderr -v=5: (3.890043207s)
--- PASS: TestPause/serial/DeletePaused (3.89s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.56s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-165255
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-165255: exit status 1 (16.036913ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-165255: no such volume

                                                
                                                
** /stderr **
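Note: here the non-zero exit is the assertion; "no such volume" confirms that deleting the profile also removed its Docker volume. An equivalent standalone check (illustrative only) would be:

	docker volume inspect pause-165255 >/dev/null 2>&1 || echo "volume pause-165255 is gone"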
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.56s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (124.7s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-597674 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E0731 11:27:48.722406   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-597674 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m4.699197226s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (124.70s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (51.59s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-746597 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
E0731 11:28:46.147674   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/functional-671868/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-746597 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (51.586189817s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (51.59s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (8.41s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-746597 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [edd67fc6-2057-4608-8a03-93d10ed84f72] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [edd67fc6-2057-4608-8a03-93d10ed84f72] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.016180646s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-746597 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.41s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.95s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-746597 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-746597 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.95s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (11.9s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-746597 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-746597 --alsologtostderr -v=3: (11.897356921s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.90s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-746597 -n no-preload-746597
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-746597 -n no-preload-746597: exit status 7 (59.852448ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-746597 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.15s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (339.96s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-746597 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-746597 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (5m39.601679451s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-746597 -n no-preload-746597
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (339.96s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (8.41s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-597674 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8b73257e-881c-4e72-930a-8a90bf0cbb69] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8b73257e-881c-4e72-930a-8a90bf0cbb69] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.014519157s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-597674 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.41s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.77s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-597674 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-597674 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.77s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (11.93s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-597674 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-597674 --alsologtostderr -v=3: (11.933993713s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.93s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-597674 -n old-k8s-version-597674
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-597674 -n old-k8s-version-597674: exit status 7 (65.990107ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-597674 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (422.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-597674 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-597674 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m1.869258309s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-597674 -n old-k8s-version-597674
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (422.17s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (70.57s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-785455 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-785455 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (1m10.56522324s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (70.57s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (35.7s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-245676 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
E0731 11:30:53.194736   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-245676 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (35.697565362s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (35.70s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.98s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-245676 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.98s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (5.7s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-245676 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-245676 --alsologtostderr -v=3: (5.698685997s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (5.70s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-245676 -n newest-cni-245676
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-245676 -n newest-cni-245676: exit status 7 (63.72707ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-245676 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (26.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-245676 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-245676 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (26.035756567s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-245676 -n newest-cni-245676
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (26.33s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.47s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-785455 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6c930ec3-94b6-4b5c-acb7-f9c8c9b2bc26] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6c930ec3-94b6-4b5c-acb7-f9c8c9b2bc26] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 7.017953129s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-785455 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.47s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-785455 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-785455 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.94s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-785455 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-785455 --alsologtostderr -v=3: (11.943775005s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.94s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-785455 -n default-k8s-diff-port-785455
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-785455 -n default-k8s-diff-port-785455: exit status 7 (59.846072ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-785455 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.16s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (336.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-785455 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-785455 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (5m35.884477363s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-785455 -n default-k8s-diff-port-785455
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (336.23s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-245676 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/newest-cni/serial/Pause (2.49s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-245676 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-245676 -n newest-cni-245676
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-245676 -n newest-cni-245676: exit status 2 (289.349663ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-245676 -n newest-cni-245676
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-245676 -n newest-cni-245676: exit status 2 (297.226841ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-245676 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-245676 -n newest-cni-245676
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-245676 -n newest-cni-245676
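
Note: after `pause`, the status templates report the API server as Paused and the kubelet as Stopped, and `status` exits 2 to flag the non-running components; the test accepts that and then verifies `unpause` brings both back. A condensed sketch of the same sequence, using the profile from this run:

    p=newest-cni-245676
    out/minikube-linux-amd64 pause -p "$p"
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p "$p"   # Paused, exit 2
    out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p "$p"     # Stopped, exit 2
    out/minikube-linux-amd64 unpause -p "$p"
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p "$p"   # back to Running
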
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.49s)

TestStartStop/group/embed-certs/serial/FirstStart (69.71s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-403460 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
E0731 11:32:48.723326   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-403460 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (1m9.708399028s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (69.71s)

TestStartStop/group/embed-certs/serial/DeployApp (8.46s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-403460 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f8d74f4c-5373-470b-bc68-9e10da620ff3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f8d74f4c-5373-470b-bc68-9e10da620ff3] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.015613334s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-403460 exec busybox -- /bin/sh -c "ulimit -n"
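
Note: DeployApp is a smoke test: create the busybox pod from testdata, wait for it to run, then exec a trivial command to prove exec works end to end through the runtime. A hand-run equivalent, assuming the same context name:

    kubectl --context embed-certs-403460 create -f testdata/busybox.yaml
    kubectl --context embed-certs-403460 wait pod --for=condition=ready \
      -l integration-test=busybox --timeout=8m
    kubectl --context embed-certs-403460 exec busybox -- /bin/sh -c "ulimit -n"
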
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.46s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.9s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-403460 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-403460 describe deploy/metrics-server -n kube-system
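
Note: the metrics-server addon is enabled with image and registry overrides (`fake.domain` cannot resolve), which suggests the step only needs the override to appear in the rendered Deployment, not a working image pull. A sketch of verifying that, with the same flags as the run:

    out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-403460 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain
    kubectl --context embed-certs-403460 describe deploy/metrics-server \
      -n kube-system | grep -i image
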
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.90s)

TestStartStop/group/embed-certs/serial/Stop (11.87s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-403460 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-403460 --alsologtostderr -v=3: (11.867212302s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.87s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.16s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-403460 -n embed-certs-403460
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-403460 -n embed-certs-403460: exit status 7 (60.703632ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-403460 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.16s)

TestStartStop/group/embed-certs/serial/SecondStart (341.71s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-403460 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
E0731 11:33:46.147914   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/functional-671868/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-403460 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (5m41.351254761s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-403460 -n embed-certs-403460
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (341.71s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-lqphm" [9becb480-be5c-4249-9e4c-98f69b96aac9] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-lqphm" [9becb480-be5c-4249-9e4c-98f69b96aac9] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.020607663s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-lqphm" [9becb480-be5c-4249-9e4c-98f69b96aac9] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00942374s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-746597 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-746597 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/no-preload/serial/Pause (2.63s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-746597 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-746597 -n no-preload-746597
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-746597 -n no-preload-746597: exit status 2 (290.063563ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-746597 -n no-preload-746597
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-746597 -n no-preload-746597: exit status 2 (288.547084ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-746597 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-746597 -n no-preload-746597
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-746597 -n no-preload-746597
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.63s)

TestNetworkPlugins/group/auto/Start (67.29s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-359242 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0731 11:35:51.767469   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/client.crt: no such file or directory
E0731 11:35:53.194431   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/ingress-addon-legacy-033299/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-359242 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m7.290990048s)
--- PASS: TestNetworkPlugins/group/auto/Start (67.29s)

TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-359242 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

TestNetworkPlugins/group/auto/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-359242 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-24nzp" [4eeca21d-8902-454f-85b0-437de8103f6f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-24nzp" [4eeca21d-8902-454f-85b0-437de8103f6f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.009472211s
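
Note: each NetCatPod step force-replaces a small netcat Deployment from testdata and waits for its pod to become Ready; the Pending entries above are normal watch output while the image pulls. An equivalent by hand, assuming the same context:

    kubectl --context auto-359242 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-359242 rollout status deployment/netcat --timeout=15m
    kubectl --context auto-359242 get pods -l app=netcat
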
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.29s)

TestNetworkPlugins/group/auto/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-359242 exec deployment/netcat -- nslookup kubernetes.default
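
Note: the DNS probe execs into the deployment and resolves the in-cluster name kubernetes.default, exercising CoreDNS across whichever CNI the group installed. On a default kubeadm service CIDR this should print the kubernetes Service ClusterIP (typically 10.96.0.1):

    kubectl --context auto-359242 exec deployment/netcat -- nslookup kubernetes.default
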
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

TestNetworkPlugins/group/auto/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-359242 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-359242 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
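
Note: Localhost and HairPin differ only in the target. The first netcats 127.0.0.1 inside the pod; the second dials the pod's own Service name, so the packet must leave the pod and be NATed back to it (hairpin traffic), which some CNI and kube-proxy setups get wrong. Both use nc's zero-I/O scan mode with 5-second timeouts:

    kubectl --context auto-359242 exec deployment/netcat -- \
      /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"   # loopback
    kubectl --context auto-359242 exec deployment/netcat -- \
      /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"      # hairpin via own Service
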
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

TestNetworkPlugins/group/kindnet/Start (70.67s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-359242 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-359242 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m10.67255264s)
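
Note: the CNI under test is chosen at start time with `--cni`, which accepts built-in names such as kindnet, calico, flannel and bridge, or a path to a manifest (see the custom-flannel group below). A quick way to confirm what the flag installed, assuming kindnet keeps its upstream app=kindnet label:

    kubectl --context kindnet-359242 -n kube-system get pods -l app=kindnet
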
--- PASS: TestNetworkPlugins/group/kindnet/Start (70.67s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-ht2vp" [4849e4d1-fba0-4184-9cd0-2b330f52e242] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.015313759s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-ht2vp" [4849e4d1-fba0-4184-9cd0-2b330f52e242] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008874244s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-597674 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-597674 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/old-k8s-version/serial/Pause (2.83s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-597674 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-597674 -n old-k8s-version-597674
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-597674 -n old-k8s-version-597674: exit status 2 (302.182174ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-597674 -n old-k8s-version-597674
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-597674 -n old-k8s-version-597674: exit status 2 (302.137107ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-597674 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-597674 -n old-k8s-version-597674
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-597674 -n old-k8s-version-597674
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.83s)

TestNetworkPlugins/group/calico/Start (63.83s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-359242 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-359242 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m3.829042685s)
--- PASS: TestNetworkPlugins/group/calico/Start (63.83s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-64h6c" [c0251a8e-8ff8-4874-9d9d-6cf1e5cf09af] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-64h6c" [c0251a8e-8ff8-4874-9d9d-6cf1e5cf09af] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.122080314s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.12s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-64h6c" [c0251a8e-8ff8-4874-9d9d-6cf1e5cf09af] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009750665s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-785455 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-785455 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.78s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-785455 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-785455 -n default-k8s-diff-port-785455
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-785455 -n default-k8s-diff-port-785455: exit status 2 (303.650267ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-785455 -n default-k8s-diff-port-785455
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-785455 -n default-k8s-diff-port-785455: exit status 2 (276.637402ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-785455 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-785455 -n default-k8s-diff-port-785455
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-785455 -n default-k8s-diff-port-785455
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.78s)

TestNetworkPlugins/group/custom-flannel/Start (57.98s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-359242 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0731 11:37:48.723117   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/addons-650980/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-359242 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (57.980047075s)
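
Note: unlike the named plugins, this group hands `--cni` a manifest path (testdata/kube-flannel.yaml), so minikube applies a user-supplied CNI config instead of a bundled one. A sketch of checking the result, assuming the manifest keeps flannel's usual app=flannel label:

    kubectl --context custom-flannel-359242 get pods -A -l app=flannel
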
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (57.98s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-q668p" [0906412f-2e87-4513-ab9d-08b752adb8f3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.03045746s
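
Note: ControllerPod steps gate the rest of the group on the CNI's own pod becoming healthy before any connectivity tests run. The same wait as a one-liner, using the selector from this step:

    kubectl --context kindnet-359242 -n kube-system wait pod --for=condition=ready \
      -l app=kindnet --timeout=10m
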
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-359242 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.3s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-359242 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-cpdbj" [76b1925b-ba07-4779-97f5-f64d41bf18f0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-cpdbj" [76b1925b-ba07-4779-97f5-f64d41bf18f0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.011286014s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.30s)

TestNetworkPlugins/group/kindnet/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-359242 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

TestNetworkPlugins/group/kindnet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-359242 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

TestNetworkPlugins/group/kindnet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-359242 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

TestNetworkPlugins/group/calico/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-rsd8f" [c9f6d624-8d0a-4f79-bdb8-f768d11579ac] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.024393855s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-359242 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

TestNetworkPlugins/group/calico/NetCatPod (11.37s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-359242 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-m6dn4" [a8e3c71b-e274-4373-885d-f241071f7324] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-m6dn4" [a8e3c71b-e274-4373-885d-f241071f7324] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.011651828s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.37s)

TestNetworkPlugins/group/enable-default-cni/Start (77.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-359242 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-359242 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m17.210490341s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (77.21s)

TestNetworkPlugins/group/calico/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-359242 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

TestNetworkPlugins/group/calico/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-359242 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

TestNetworkPlugins/group/calico/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-359242 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-359242 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.38s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-359242 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-sgmrz" [5ac404a9-12a8-4fd8-aed7-3b41bfb381d4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0731 11:38:46.148291   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/functional-671868/client.crt: no such file or directory
helpers_test.go:344: "netcat-7458db8b8-sgmrz" [5ac404a9-12a8-4fd8-aed7-3b41bfb381d4] Running
E0731 11:38:50.055261   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/no-preload-746597/client.crt: no such file or directory
E0731 11:38:50.060492   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/no-preload-746597/client.crt: no such file or directory
E0731 11:38:50.070789   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/no-preload-746597/client.crt: no such file or directory
E0731 11:38:50.091603   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/no-preload-746597/client.crt: no such file or directory
E0731 11:38:50.131852   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/no-preload-746597/client.crt: no such file or directory
E0731 11:38:50.212108   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/no-preload-746597/client.crt: no such file or directory
E0731 11:38:50.372235   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/no-preload-746597/client.crt: no such file or directory
E0731 11:38:50.692621   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/no-preload-746597/client.crt: no such file or directory
E0731 11:38:51.333475   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/no-preload-746597/client.crt: no such file or directory
E0731 11:38:52.614622   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/no-preload-746597/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.009344165s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.38s)

TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-359242 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-359242 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-359242 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

TestNetworkPlugins/group/flannel/Start (60.06s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-359242 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0731 11:39:00.295513   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/no-preload-746597/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-359242 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m0.063359318s)
--- PASS: TestNetworkPlugins/group/flannel/Start (60.06s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (8.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-g97mr" [47f02b05-33ba-47f4-844a-16b22b3ee20a] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0731 11:39:10.537921   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/no-preload-746597/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-g97mr" [47f02b05-33ba-47f4-844a-16b22b3ee20a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.019095833s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (8.02s)

TestNetworkPlugins/group/bridge/Start (36.58s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-359242 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-359242 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (36.577779463s)
--- PASS: TestNetworkPlugins/group/bridge/Start (36.58s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-g97mr" [47f02b05-33ba-47f4-844a-16b22b3ee20a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010500554s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-403460 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.37s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-403460 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.37s)

TestStartStop/group/embed-certs/serial/Pause (2.95s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-403460 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-403460 -n embed-certs-403460
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-403460 -n embed-certs-403460: exit status 2 (275.246592ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-403460 -n embed-certs-403460
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-403460 -n embed-certs-403460: exit status 2 (309.489359ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-403460 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-403460 -n embed-certs-403460
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-403460 -n embed-certs-403460
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.95s)
E0731 11:39:31.018918   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/no-preload-746597/client.crt: no such file or directory
E0731 11:39:37.916085   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/old-k8s-version-597674/client.crt: no such file or directory
E0731 11:39:37.921328   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/old-k8s-version-597674/client.crt: no such file or directory
E0731 11:39:37.931574   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/old-k8s-version-597674/client.crt: no such file or directory
E0731 11:39:37.951846   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/old-k8s-version-597674/client.crt: no such file or directory
E0731 11:39:37.992196   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/old-k8s-version-597674/client.crt: no such file or directory
E0731 11:39:38.072492   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/old-k8s-version-597674/client.crt: no such file or directory
E0731 11:39:38.233067   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/old-k8s-version-597674/client.crt: no such file or directory
E0731 11:39:38.554200   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/old-k8s-version-597674/client.crt: no such file or directory
E0731 11:39:39.194593   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/old-k8s-version-597674/client.crt: no such file or directory
E0731 11:39:40.475289   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/old-k8s-version-597674/client.crt: no such file or directory
E0731 11:39:43.036201   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/old-k8s-version-597674/client.crt: no such file or directory
E0731 11:39:48.156654   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/old-k8s-version-597674/client.crt: no such file or directory

TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-359242 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

TestNetworkPlugins/group/bridge/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-359242 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-7zs5x" [469c6260-327b-42c8-bf85-dbcd812301cd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-7zs5x" [469c6260-327b-42c8-bf85-dbcd812301cd] Running
E0731 11:39:58.397656   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/old-k8s-version-597674/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.007686239s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.28s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-359242 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.32s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-359242 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-p2cj6" [f0642559-1d93-486d-b1a8-f9dd50c2ca9a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-p2cj6" [f0642559-1d93-486d-b1a8-f9dd50c2ca9a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.009602857s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.32s)

TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-964ps" [8786aabb-c2c6-4004-b97b-c892b51aba1a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.017141182s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

TestNetworkPlugins/group/bridge/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-359242 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

TestNetworkPlugins/group/bridge/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-359242 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-359242 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)
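
HairPin is the strictest of the three connectivity probes: the netcat pod dials its own Service name, so traffic leaves the pod, hits the Service VIP, and must be NATed back to the very same pod. This is the command the test runs, usable as-is for manual debugging (it assumes the Service created by the test manifest is named netcat and exposes port 8080):

  # From inside the pod, connect back through the pod's own Service
  kubectl --context bridge-359242 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"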

TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-359242 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-359242 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

TestNetworkPlugins/group/flannel/NetCatPod (9.34s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-359242 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-fvqcp" [10ac28ab-6564-4e72-aa93-3c4d90fc0505] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-fvqcp" [10ac28ab-6564-4e72-aa93-3c4d90fc0505] Running
E0731 11:40:11.980067   15646 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/no-preload-746597/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.009911404s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.34s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-359242 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-359242 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

TestNetworkPlugins/group/flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-359242 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

TestNetworkPlugins/group/flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-359242 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

TestNetworkPlugins/group/flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-359242 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

Test skip (24/304)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)
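
The skip fires because a preloaded image tarball for this Kubernetes version is already in the local cache. A hedged way to check what the test sees, assuming the default cache layout under ~/.minikube (exact tarball names vary by preload version and runtime):

  # List cached preload tarballs for v1.16.0
  ls ~/.minikube/cache/preloaded-tarball/ | grep v1.16.0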

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within it.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test applies only to darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.27.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.27.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.3/cached-images (0.00s)

TestDownloadOnly/v1.27.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.27.3/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within it.
--- SKIP: TestDownloadOnly/v1.27.3/binaries (0.00s)

TestDownloadOnly/v1.27.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.27.3/kubectl
aaa_download_only_test.go:152: Test applies only to darwin and windows
--- SKIP: TestDownloadOnly/v1.27.3/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)
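
docker-env is only meaningful when the cluster runtime is docker; with crio there is no docker daemon socket to point the client at. For reference, on a docker-runtime profile the command under test emits shell exports consumed like this (the profile name is a placeholder):

  # Point the local docker client at the cluster's docker daemon
  eval $(minikube -p PROFILE docker-env)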

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.14s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-931837" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-931837
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

TestNetworkPlugins/group/kubenet (2.72s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:522: 
----------------------- debugLogs start: kubenet-359242 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-359242

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-359242

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-359242

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-359242

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-359242

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-359242

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-359242

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-359242

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-359242

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-359242

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-359242"

>>> host: /etc/hosts:
* Profile "kubenet-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-359242"

>>> host: /etc/resolv.conf:
* Profile "kubenet-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-359242"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-359242

>>> host: crictl pods:
* Profile "kubenet-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-359242"

>>> host: crictl containers:
* Profile "kubenet-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-359242"

>>> k8s: describe netcat deployment:
error: context "kubenet-359242" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-359242" does not exist

>>> k8s: netcat logs:
error: context "kubenet-359242" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-359242" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-359242" does not exist

>>> k8s: coredns logs:
error: context "kubenet-359242" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-359242" does not exist

>>> k8s: api server logs:
error: context "kubenet-359242" does not exist

>>> host: /etc/cni:
* Profile "kubenet-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-359242"

>>> host: ip a s:
* Profile "kubenet-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-359242"

>>> host: ip r s:
* Profile "kubenet-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-359242"

>>> host: iptables-save:
* Profile "kubenet-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-359242"

>>> host: iptables table nat:
* Profile "kubenet-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-359242"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-359242" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-359242" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-359242" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-359242"

>>> host: kubelet daemon config:
* Profile "kubenet-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-359242"

>>> k8s: kubelet logs:
* Profile "kubenet-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-359242"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-359242"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-359242"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/16968-8855/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 31 Jul 2023 11:26:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.1
      name: cluster_info
    server: https://192.168.94.2:8443
  name: cert-expiration-735531
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/16968-8855/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 31 Jul 2023 11:25:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-182090
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/16968-8855/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 31 Jul 2023 11:27:12 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.1
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-165255
contexts:
- context:
    cluster: cert-expiration-735531
    extensions:
    - extension:
        last-update: Mon, 31 Jul 2023 11:26:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.1
      name: context_info
    namespace: default
    user: cert-expiration-735531
  name: cert-expiration-735531
- context:
    cluster: kubernetes-upgrade-182090
    user: kubernetes-upgrade-182090
  name: kubernetes-upgrade-182090
- context:
    cluster: pause-165255
    extensions:
    - extension:
        last-update: Mon, 31 Jul 2023 11:27:12 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.1
      name: context_info
    namespace: default
    user: pause-165255
  name: pause-165255
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-735531
  user:
    client-certificate: /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/cert-expiration-735531/client.crt
    client-key: /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/cert-expiration-735531/client.key
- name: kubernetes-upgrade-182090
  user:
    client-certificate: /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/kubernetes-upgrade-182090/client.crt
    client-key: /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/kubernetes-upgrade-182090/client.key
- name: pause-165255
  user:
    client-certificate: /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/pause-165255/client.crt
    client-key: /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/pause-165255/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-359242

>>> host: docker daemon status:
* Profile "kubenet-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-359242"

>>> host: docker daemon config:
* Profile "kubenet-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-359242"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-359242"

>>> host: docker system info:
* Profile "kubenet-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-359242"

>>> host: cri-docker daemon status:
* Profile "kubenet-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-359242"

>>> host: cri-docker daemon config:
* Profile "kubenet-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-359242"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-359242"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-359242"

>>> host: cri-dockerd version:
* Profile "kubenet-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-359242"

>>> host: containerd daemon status:
* Profile "kubenet-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-359242"

>>> host: containerd daemon config:
* Profile "kubenet-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-359242"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-359242"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-359242"

>>> host: containerd config dump:
* Profile "kubenet-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-359242"

>>> host: crio daemon status:
* Profile "kubenet-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-359242"

>>> host: crio daemon config:
* Profile "kubenet-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-359242"

>>> host: /etc/crio:
* Profile "kubenet-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-359242"

>>> host: crio config:
* Profile "kubenet-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-359242"

----------------------- debugLogs end: kubenet-359242 [took: 2.56959659s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-359242" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-359242
--- SKIP: TestNetworkPlugins/group/kubenet (2.72s)
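
kubenet is a kubelet-level network plugin rather than a CNI, and crio is wired up exclusively through CNI, hence the unconditional skip. A sketch of the combination this test would otherwise exercise, assuming the historical (since-deprecated) --network-plugin flag and a runtime with built-in kubenet support:

  # kubenet only applies to runtimes that bypass CNI, e.g. docker
  minikube start -p kubenet-359242 --container-runtime=docker --network-plugin=kubenet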

TestNetworkPlugins/group/cilium (3.33s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it interferes with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-359242 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-359242

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-359242

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-359242

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-359242

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-359242

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-359242

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-359242

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-359242

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-359242

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-359242

>>> host: /etc/nsswitch.conf:
* Profile "cilium-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359242"

>>> host: /etc/hosts:
* Profile "cilium-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359242"

>>> host: /etc/resolv.conf:
* Profile "cilium-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359242"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-359242

>>> host: crictl pods:
* Profile "cilium-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359242"

>>> host: crictl containers:
* Profile "cilium-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359242"

>>> k8s: describe netcat deployment:
error: context "cilium-359242" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-359242" does not exist

>>> k8s: netcat logs:
error: context "cilium-359242" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-359242" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-359242" does not exist

>>> k8s: coredns logs:
error: context "cilium-359242" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-359242" does not exist

>>> k8s: api server logs:
error: context "cilium-359242" does not exist

>>> host: /etc/cni:
* Profile "cilium-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359242"

>>> host: ip a s:
* Profile "cilium-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359242"

>>> host: ip r s:
* Profile "cilium-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359242"

>>> host: iptables-save:
* Profile "cilium-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359242"

>>> host: iptables table nat:
* Profile "cilium-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359242"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-359242

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-359242

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-359242" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-359242" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-359242

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-359242

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-359242" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-359242" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-359242" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-359242" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-359242" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359242"

>>> host: kubelet daemon config:
* Profile "cilium-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359242"

>>> k8s: kubelet logs:
* Profile "cilium-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359242"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359242"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359242"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/16968-8855/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 31 Jul 2023 11:26:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.1
      name: cluster_info
    server: https://192.168.94.2:8443
  name: cert-expiration-735531
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/16968-8855/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 31 Jul 2023 11:25:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-182090
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/16968-8855/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 31 Jul 2023 11:27:12 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.1
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-165255
contexts:
- context:
    cluster: cert-expiration-735531
    extensions:
    - extension:
        last-update: Mon, 31 Jul 2023 11:26:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.1
      name: context_info
    namespace: default
    user: cert-expiration-735531
  name: cert-expiration-735531
- context:
    cluster: kubernetes-upgrade-182090
    user: kubernetes-upgrade-182090
  name: kubernetes-upgrade-182090
- context:
    cluster: pause-165255
    extensions:
    - extension:
        last-update: Mon, 31 Jul 2023 11:27:12 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.1
      name: context_info
    namespace: default
    user: pause-165255
  name: pause-165255
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-735531
  user:
    client-certificate: /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/cert-expiration-735531/client.crt
    client-key: /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/cert-expiration-735531/client.key
- name: kubernetes-upgrade-182090
  user:
    client-certificate: /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/kubernetes-upgrade-182090/client.crt
    client-key: /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/kubernetes-upgrade-182090/client.key
- name: pause-165255
  user:
    client-certificate: /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/pause-165255/client.crt
    client-key: /home/jenkins/minikube-integration/16968-8855/.minikube/profiles/pause-165255/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-359242

>>> host: docker daemon status:
* Profile "cilium-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359242"

>>> host: docker daemon config:
* Profile "cilium-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359242"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359242"

>>> host: docker system info:
* Profile "cilium-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359242"

>>> host: cri-docker daemon status:
* Profile "cilium-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359242"

>>> host: cri-docker daemon config:
* Profile "cilium-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359242"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359242"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359242"

>>> host: cri-dockerd version:
* Profile "cilium-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359242"

>>> host: containerd daemon status:
* Profile "cilium-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359242"

>>> host: containerd daemon config:
* Profile "cilium-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359242"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359242"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359242"

>>> host: containerd config dump:
* Profile "cilium-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359242"

>>> host: crio daemon status:
* Profile "cilium-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359242"

>>> host: crio daemon config:
* Profile "cilium-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359242"

>>> host: /etc/crio:
* Profile "cilium-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359242"

>>> host: crio config:
* Profile "cilium-359242" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-359242"

----------------------- debugLogs end: cilium-359242 [took: 3.187053979s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-359242" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-359242
--- SKIP: TestNetworkPlugins/group/cilium (3.33s)